diff --git a/.changelog/96979b8ab9d04d28990dfe28ae1aca84.json b/.changelog/96979b8ab9d04d28990dfe28ae1aca84.json
new file mode 100644
index 00000000000..8a8a25a3b83
--- /dev/null
+++ b/.changelog/96979b8ab9d04d28990dfe28ae1aca84.json
@@ -0,0 +1,15 @@
+{
+ "id": "96979b8a-b9d0-4d28-990d-fe28ae1aca84",
+ "type": "bugfix",
+ "collapse": true,
+ "description": "Fix improper use of printf-style functions.",
+ "modules": [
+ ".",
+ "config",
+ "credentials",
+ "feature/s3/manager",
+ "internal/endpoints/v2",
+ "service/kinesis/internal/testing",
+ "service/transcribestreaming/internal/testing"
+ ]
+}
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 800f23dd6c9..9a0d9c8a104 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,100 @@
+# Release (2024-12-18)
+
+## Module Highlights
+* `github.com/aws/aws-sdk-go-v2/service/amplify`: [v1.28.0](service/amplify/CHANGELOG.md#v1280-2024-12-18)
+ * **Feature**: Added WAF Configuration to Amplify Apps
+* `github.com/aws/aws-sdk-go-v2/service/budgets`: [v1.29.0](service/budgets/CHANGELOG.md#v1290-2024-12-18)
+ * **Feature**: Releasing minor partition endpoint updates
+* `github.com/aws/aws-sdk-go-v2/service/connect`: [v1.122.0](service/connect/CHANGELOG.md#v11220-2024-12-18)
+ * **Feature**: This release adds support for the UpdateParticipantAuthentication API used for customer authentication within Amazon Connect chats.
+* `github.com/aws/aws-sdk-go-v2/service/connectparticipant`: [v1.28.0](service/connectparticipant/CHANGELOG.md#v1280-2024-12-18)
+ * **Feature**: This release adds support for the GetAuthenticationUrl and CancelParticipantAuthentication APIs used for customer authentication within Amazon Connect chats. There are also minor updates to the GetAttachment API.
+* `github.com/aws/aws-sdk-go-v2/service/datasync`: [v1.44.0](service/datasync/CHANGELOG.md#v1440-2024-12-18)
+ * **Feature**: AWS DataSync introduces the ability to update attributes for in-cloud locations.
+* `github.com/aws/aws-sdk-go-v2/service/iot`: [v1.62.0](service/iot/CHANGELOG.md#v1620-2024-12-18)
+ * **Feature**: Release the connectivity status query API, a dedicated high-throughput (TPS) API for querying a specific device's most recent connectivity state and metadata.
+* `github.com/aws/aws-sdk-go-v2/service/mwaa`: [v1.33.2](service/mwaa/CHANGELOG.md#v1332-2024-12-18)
+ * **Documentation**: Added support for Apache Airflow version 2.10.3 to MWAA.
+* `github.com/aws/aws-sdk-go-v2/service/quicksight`: [v1.82.0](service/quicksight/CHANGELOG.md#v1820-2024-12-18)
+ * **Feature**: Add support for PerformanceConfiguration attribute to Dataset entity. Allow PerformanceConfiguration specification in CreateDataset and UpdateDataset APIs.
+* `github.com/aws/aws-sdk-go-v2/service/resiliencehub`: [v1.29.0](service/resiliencehub/CHANGELOG.md#v1290-2024-12-18)
+ * **Feature**: AWS Resilience Hub now automatically detects already configured CloudWatch alarms and FIS experiments as part of the assessment process and returns the discovered resources in the corresponding list API responses. It also allows you to include or exclude test recommendations for an AppComponent.
+* `github.com/aws/aws-sdk-go-v2/service/transfer`: [v1.55.0](service/transfer/CHANGELOG.md#v1550-2024-12-18)
+ * **Feature**: Added AS2 agreement configurations to control filename preservation and message signing enforcement. Added AS2 connector configuration to preserve content type from S3 objects.
+
+# Release (2024-12-17)
+
+## Module Highlights
+* `github.com/aws/aws-sdk-go-v2/service/account`: [v1.22.0](service/account/CHANGELOG.md#v1220-2024-12-17)
+ * **Feature**: Update endpoint configuration.
+* `github.com/aws/aws-sdk-go-v2/service/backup`: [v1.40.0](service/backup/CHANGELOG.md#v1400-2024-12-17)
+ * **Feature**: Add Support for Backup Indexing
+* `github.com/aws/aws-sdk-go-v2/service/backupsearch`: [v1.0.0](service/backupsearch/CHANGELOG.md#v100-2024-12-17)
+ * **Release**: New AWS service client module
+ * **Feature**: Add support for searching backups
+* `github.com/aws/aws-sdk-go-v2/service/batch`: [v1.49.0](service/batch/CHANGELOG.md#v1490-2024-12-17)
+ * **Feature**: This feature allows AWS Batch on Amazon EKS to support configuration of Pod Annotations, overriding the Namespace in which the Batch job's Pod runs, and allows a Subpath and Persistent Volume claim to be set for AWS Batch on Amazon EKS jobs.
+* `github.com/aws/aws-sdk-go-v2/service/cleanroomsml`: [v1.11.0](service/cleanroomsml/CHANGELOG.md#v1110-2024-12-17)
+ * **Feature**: Add support for SQL compute configuration for StartAudienceGenerationJob API.
+* `github.com/aws/aws-sdk-go-v2/service/cloudfront`: [v1.44.0](service/cloudfront/CHANGELOG.md#v1440-2024-12-17)
+ * **Feature**: Adds support for OriginReadTimeout and OriginKeepaliveTimeout to create CloudFront Distributions with VPC Origins.
+* `github.com/aws/aws-sdk-go-v2/service/codepipeline`: [v1.38.0](service/codepipeline/CHANGELOG.md#v1380-2024-12-17)
+ * **Feature**: AWS CodePipeline V2 type pipelines now support Managed Compute Rule.
+* `github.com/aws/aws-sdk-go-v2/service/ecs`: [v1.53.0](service/ecs/CHANGELOG.md#v1530-2024-12-17)
+ * **Feature**: Added support for enableFaultInjection task definition parameter which can be used to enable Fault Injection feature on ECS tasks.
+* `github.com/aws/aws-sdk-go-v2/service/m2`: [v1.19.0](service/m2/CHANGELOG.md#v1190-2024-12-17)
+ * **Feature**: This release adds support for the AWS Mainframe Modernization (M2) service to allow specifying the network type (ipv4, dual) for environment instances. For the dual network type, M2 environment applications will serve both IPv4 and IPv6 requests, whereas for ipv4 they will serve only IPv4 requests.
+* `github.com/aws/aws-sdk-go-v2/service/synthetics`: [v1.31.0](service/synthetics/CHANGELOG.md#v1310-2024-12-17)
+ * **Feature**: Add support to toggle outbound IPv6 traffic on canaries connected to dualstack subnets. This behavior can be controlled via the new Ipv6AllowedForDualStack parameter of the VpcConfig input object in CreateCanary and UpdateCanary APIs.
+
+# Release (2024-12-16)
+
+## Module Highlights
+* `github.com/aws/aws-sdk-go-v2/feature/dsql/auth`: [v1.0.0](feature/dsql/auth/CHANGELOG.md#v100-2024-12-16)
+ * **Release**: Add Aurora DSQL Auth Token Generator
+* `github.com/aws/aws-sdk-go-v2/service/cloud9`: [v1.28.8](service/cloud9/CHANGELOG.md#v1288-2024-12-16)
+ * **Documentation**: Added information that Ubuntu 18.04 will be removed from the available imageIds for Cloud9 because Ubuntu 18.04 ended standard support on May 31, 2023.
+* `github.com/aws/aws-sdk-go-v2/service/dlm`: [v1.29.0](service/dlm/CHANGELOG.md#v1290-2024-12-16)
+ * **Feature**: This release adds support for Local Zones in Amazon Data Lifecycle Manager EBS snapshot lifecycle policies.
+* `github.com/aws/aws-sdk-go-v2/service/ec2`: [v1.198.0](service/ec2/CHANGELOG.md#v11980-2024-12-16)
+ * **Feature**: This release adds support for EBS local snapshots in AWS Dedicated Local Zones, which allows you to store snapshots of EBS volumes locally in Dedicated Local Zones.
+* `github.com/aws/aws-sdk-go-v2/service/greengrassv2`: [v1.36.0](service/greengrassv2/CHANGELOG.md#v1360-2024-12-16)
+ * **Feature**: Add support for runtime in GetCoreDevice and ListCoreDevices APIs.
+* `github.com/aws/aws-sdk-go-v2/service/medialive`: [v1.64.0](service/medialive/CHANGELOG.md#v1640-2024-12-16)
+ * **Feature**: AWS Elemental MediaLive adds three new features: MediaPackage v2 endpoint support for live stream delivery, KLV metadata passthrough in CMAF Ingest output groups, and Metadata Name Modifier in CMAF Ingest output groups for customizing metadata track names in output streams.
+* `github.com/aws/aws-sdk-go-v2/service/rds`: [v1.93.0](service/rds/CHANGELOG.md#v1930-2024-12-16)
+ * **Feature**: This release adds support for the "MYSQL_CACHING_SHA2_PASSWORD" enum value for RDS Proxy ClientPasswordAuthType.
+
+# Release (2024-12-13)
+
+## Module Highlights
+* `github.com/aws/aws-sdk-go-v2/service/cloudhsmv2`: [v1.28.0](service/cloudhsmv2/CHANGELOG.md#v1280-2024-12-13)
+ * **Feature**: Add support for Dual-Stack hsm2m.medium clusters. Customers can now create hsm2m.medium clusters with both IPv4 and IPv6 connection capabilities by specifying the new NetworkType=DUALSTACK parameter during cluster creation.
+* `github.com/aws/aws-sdk-go-v2/service/ec2`: [v1.197.0](service/ec2/CHANGELOG.md#v11970-2024-12-13)
+ * **Feature**: This release adds GroupId to the response for DeleteSecurityGroup.
+* `github.com/aws/aws-sdk-go-v2/service/eks`: [v1.54.0](service/eks/CHANGELOG.md#v1540-2024-12-13)
+ * **Feature**: Add NodeRepairConfig in CreateNodegroupRequest and UpdateNodegroupConfigRequest
+* `github.com/aws/aws-sdk-go-v2/service/mediaconnect`: [v1.36.0](service/mediaconnect/CHANGELOG.md#v1360-2024-12-13)
+ * **Feature**: AWS Elemental MediaConnect Gateway now supports Source Specific Multicast (SSM) for ingress bridges. This enables you to specify a source IP address in addition to a multicast IP when creating or updating an ingress bridge source.
+* `github.com/aws/aws-sdk-go-v2/service/networkmanager`: [v1.32.2](service/networkmanager/CHANGELOG.md#v1322-2024-12-13)
+ * **Documentation**: Removed a sentence fragment from the UpdateDirectConnectGatewayAttachment documentation that was causing customer confusion about whether it was an incomplete sentence or a typo.
+* `github.com/aws/aws-sdk-go-v2/service/servicediscovery`: [v1.34.0](service/servicediscovery/CHANGELOG.md#v1340-2024-12-13)
+ * **Feature**: AWS Cloud Map now supports service-level attributes, allowing you to associate custom metadata directly with services. These attributes can be retrieved, updated, and deleted using the new GetServiceAttributes, UpdateServiceAttributes, and DeleteServiceAttributes API calls.
+
+# Release (2024-12-12)
+
+## Module Highlights
+* `github.com/aws/aws-sdk-go-v2/service/connect`: [v1.121.0](service/connect/CHANGELOG.md#v11210-2024-12-12)
+ * **Feature**: Configure holidays and other overrides to hours of operation in advance. During contact handling, Amazon Connect automatically checks for overrides and provides customers with an appropriate flow path. After an override period passes, the call center automatically reverts to standard hours of operation.
+* `github.com/aws/aws-sdk-go-v2/service/databasemigrationservice`: [v1.45.0](service/databasemigrationservice/CHANGELOG.md#v1450-2024-12-12)
+ * **Feature**: Add parameters to support Kerberos authentication. Add a parameter for disabling the Unicode source filter with PostgreSQL settings. Add a parameter to use large integer values with Kinesis/Kafka settings.
+* `github.com/aws/aws-sdk-go-v2/service/glue`: [v1.104.0](service/glue/CHANGELOG.md#v11040-2024-12-12)
+ * **Feature**: To support customer-managed encryption in Data Quality, allowing customers to encrypt data with their own KMS key, this release adds a DataQualityEncryption field to the SecurityConfiguration API where customers can provide their KMS keys.
+* `github.com/aws/aws-sdk-go-v2/service/guardduty`: [v1.52.1](service/guardduty/CHANGELOG.md#v1521-2024-12-12)
+ * **Documentation**: Improved descriptions for certain APIs.
+* `github.com/aws/aws-sdk-go-v2/service/route53domains`: [v1.28.0](service/route53domains/CHANGELOG.md#v1280-2024-12-12)
+ * **Feature**: This release includes the following API updates: added the enumeration type RESTORE_DOMAIN to the OperationType; constrained the Price attribute to non-negative values; updated the LangCode to allow 2 or 3 alphabetical characters.
+
# Release (2024-12-11)
## General Highlights
diff --git a/codegen/sdk-codegen/aws-models/account.json b/codegen/sdk-codegen/aws-models/account.json
index 95a0177d57b..09b328e8628 100644
--- a/codegen/sdk-codegen/aws-models/account.json
+++ b/codegen/sdk-codegen/aws-models/account.json
@@ -123,6 +123,9 @@
"aws.auth#sigv4": {
"name": "account"
},
+ "aws.endpoints#standardPartitionalEndpoints": {
+ "endpointPatternType": "service_region_dnsSuffix"
+ },
"aws.protocols#restJson1": {},
"smithy.api#cors": {},
"smithy.api#documentation": "Operations for Amazon Web Services Account Management",
@@ -138,12 +141,6 @@
"smithy.rules#endpointRuleSet": {
"version": "1.0",
"parameters": {
- "Region": {
- "builtIn": "AWS::Region",
- "required": false,
- "documentation": "The AWS region used to dispatch the request.",
- "type": "String"
- },
"UseDualStack": {
"builtIn": "AWS::UseDualStack",
"required": true,
@@ -163,6 +160,12 @@
"required": false,
"documentation": "Override the endpoint used to send this request",
"type": "String"
+ },
+ "Region": {
+ "builtIn": "AWS::Region",
+ "required": false,
+ "documentation": "The AWS region used to dispatch the request.",
+ "type": "String"
}
},
"rules": [
@@ -194,263 +197,235 @@
"type": "error"
},
{
- "conditions": [
+ "conditions": [],
+ "rules": [
{
- "fn": "booleanEquals",
- "argv": [
+ "conditions": [
{
- "ref": "UseDualStack"
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseDualStack"
+ },
+ true
+ ]
+ }
+ ],
+ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported",
+ "type": "error"
+ },
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": {
+ "ref": "Endpoint"
},
- true
- ]
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
}
],
- "error": "Invalid Configuration: Dualstack and custom endpoint are not supported",
- "type": "error"
- },
- {
- "conditions": [],
- "endpoint": {
- "url": {
- "ref": "Endpoint"
- },
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
+ "type": "tree"
}
],
"type": "tree"
},
{
- "conditions": [
- {
- "fn": "isSet",
- "argv": [
- {
- "ref": "Region"
- }
- ]
- }
- ],
+ "conditions": [],
"rules": [
{
"conditions": [
{
- "fn": "aws.partition",
+ "fn": "isSet",
"argv": [
{
"ref": "Region"
}
- ],
- "assign": "PartitionResult"
+ ]
}
],
"rules": [
{
"conditions": [
{
- "fn": "stringEquals",
+ "fn": "aws.partition",
"argv": [
{
- "fn": "getAttr",
- "argv": [
- {
- "ref": "PartitionResult"
- },
- "name"
- ]
- },
- "aws"
- ]
- },
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseFIPS"
- },
- false
- ]
- },
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseDualStack"
- },
- false
- ]
+ "ref": "Region"
+ }
+ ],
+ "assign": "PartitionResult"
}
],
- "endpoint": {
- "url": "https://account.us-east-1.amazonaws.com",
- "properties": {
- "authSchemes": [
- {
- "name": "sigv4",
- "signingName": "account",
- "signingRegion": "us-east-1"
- }
- ]
- },
- "headers": {}
- },
- "type": "endpoint"
- },
- {
- "conditions": [
+ "rules": [
{
- "fn": "stringEquals",
- "argv": [
+ "conditions": [
{
- "fn": "getAttr",
+ "fn": "booleanEquals",
"argv": [
{
- "ref": "PartitionResult"
+ "ref": "UseFIPS"
},
- "name"
+ true
]
},
- "aws-cn"
- ]
- },
- {
- "fn": "booleanEquals",
- "argv": [
{
- "ref": "UseFIPS"
- },
- false
- ]
- },
- {
- "fn": "booleanEquals",
- "argv": [
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseDualStack"
+ },
+ true
+ ]
+ }
+ ],
+ "rules": [
{
- "ref": "UseDualStack"
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ true,
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsFIPS"
+ ]
+ }
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ true,
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsDualStack"
+ ]
+ }
+ ]
+ }
+ ],
+ "rules": [
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://account-fips.{PartitionResult#implicitGlobalRegion}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "{PartitionResult#implicitGlobalRegion}"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ],
+ "type": "tree"
},
- false
- ]
- }
- ],
- "endpoint": {
- "url": "https://account.cn-northwest-1.amazonaws.com.cn",
- "properties": {
- "authSchemes": [
{
- "name": "sigv4",
- "signingName": "account",
- "signingRegion": "cn-northwest-1"
+ "conditions": [],
+ "error": "FIPS and DualStack are enabled, but this partition does not support one or both",
+ "type": "error"
}
- ]
- },
- "headers": {}
- },
- "type": "endpoint"
- },
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseFIPS"
- },
- true
- ]
+ ],
+ "type": "tree"
},
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseDualStack"
- },
- true
- ]
- }
- ],
- "rules": [
{
"conditions": [
{
"fn": "booleanEquals",
"argv": [
- true,
{
- "fn": "getAttr",
- "argv": [
- {
- "ref": "PartitionResult"
- },
- "supportsFIPS"
- ]
- }
+ "ref": "UseFIPS"
+ },
+ true
]
},
{
"fn": "booleanEquals",
"argv": [
- true,
{
- "fn": "getAttr",
+ "ref": "UseDualStack"
+ },
+ false
+ ]
+ }
+ ],
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
"argv": [
{
- "ref": "PartitionResult"
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsFIPS"
+ ]
},
- "supportsDualStack"
+ true
]
}
- ]
- }
- ],
- "rules": [
+ ],
+ "rules": [
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://account-fips.{PartitionResult#implicitGlobalRegion}.{PartitionResult#dnsSuffix}",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "{PartitionResult#implicitGlobalRegion}"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ],
+ "type": "tree"
+ },
{
"conditions": [],
- "endpoint": {
- "url": "https://account-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
+ "error": "FIPS is enabled but this partition does not support FIPS",
+ "type": "error"
}
],
"type": "tree"
},
{
- "conditions": [],
- "error": "FIPS and DualStack are enabled, but this partition does not support one or both",
- "type": "error"
- }
- ],
- "type": "tree"
- },
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
+ "conditions": [
{
- "ref": "UseFIPS"
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseFIPS"
+ },
+ false
+ ]
},
- true
- ]
- }
- ],
- "rules": [
- {
- "conditions": [
{
"fn": "booleanEquals",
"argv": [
{
- "fn": "getAttr",
- "argv": [
- {
- "ref": "PartitionResult"
- },
- "supportsFIPS"
- ]
+ "ref": "UseDualStack"
},
true
]
@@ -458,127 +433,130 @@
],
"rules": [
{
- "conditions": [],
- "endpoint": {
- "url": "https://account-fips.{Region}.{PartitionResult#dnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
- }
- ],
- "type": "tree"
- },
- {
- "conditions": [],
- "error": "FIPS is enabled but this partition does not support FIPS",
- "type": "error"
- }
- ],
- "type": "tree"
- },
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- {
- "ref": "UseDualStack"
- },
- true
- ]
- }
- ],
- "rules": [
- {
- "conditions": [
- {
- "fn": "booleanEquals",
- "argv": [
- true,
+ "conditions": [
{
- "fn": "getAttr",
+ "fn": "booleanEquals",
"argv": [
+ true,
{
- "ref": "PartitionResult"
- },
- "supportsDualStack"
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "supportsDualStack"
+ ]
+ }
]
}
- ]
- }
- ],
- "rules": [
+ ],
+ "rules": [
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://account.{PartitionResult#implicitGlobalRegion}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "{PartitionResult#implicitGlobalRegion}"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ],
+ "type": "tree"
+ },
{
"conditions": [],
- "endpoint": {
- "url": "https://account.{Region}.{PartitionResult#dualStackDnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
+ "error": "DualStack is enabled but this partition does not support DualStack",
+ "type": "error"
}
],
"type": "tree"
},
{
"conditions": [],
- "error": "DualStack is enabled but this partition does not support DualStack",
- "type": "error"
+ "endpoint": {
+ "url": "https://account.{PartitionResult#implicitGlobalRegion}.{PartitionResult#dnsSuffix}",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "{PartitionResult#implicitGlobalRegion}"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
}
],
"type": "tree"
- },
- {
- "conditions": [],
- "endpoint": {
- "url": "https://account.{Region}.{PartitionResult#dnsSuffix}",
- "properties": {},
- "headers": {}
- },
- "type": "endpoint"
}
],
"type": "tree"
+ },
+ {
+ "conditions": [],
+ "error": "Invalid Configuration: Missing Region",
+ "type": "error"
}
],
"type": "tree"
- },
- {
- "conditions": [],
- "error": "Invalid Configuration: Missing Region",
- "type": "error"
}
]
},
"smithy.rules#endpointTests": {
"testCases": [
{
- "documentation": "For region aws-global with FIPS disabled and DualStack disabled",
+ "documentation": "For custom endpoint with region not set and fips disabled",
"expect": {
"endpoint": {
- "properties": {
- "authSchemes": [
- {
- "name": "sigv4",
- "signingName": "account",
- "signingRegion": "us-east-1"
- }
- ]
- },
- "url": "https://account.us-east-1.amazonaws.com"
+ "url": "https://example.com"
}
},
"params": {
- "Region": "aws-global",
+ "Endpoint": "https://example.com",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For custom endpoint with fips enabled",
+ "expect": {
+ "error": "Invalid Configuration: FIPS and custom endpoint are not supported"
+ },
+ "params": {
+ "Endpoint": "https://example.com",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For custom endpoint with fips disabled and dualstack enabled",
+ "expect": {
+ "error": "Invalid Configuration: Dualstack and custom endpoint are not supported"
+ },
+ "params": {
+ "Endpoint": "https://example.com",
"UseFIPS": false,
- "UseDualStack": false
+ "UseDualStack": true
}
},
{
"documentation": "For region us-east-1 with FIPS enabled and DualStack enabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
"url": "https://account-fips.us-east-1.api.aws"
}
},
@@ -592,6 +570,14 @@
"documentation": "For region us-east-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
"url": "https://account-fips.us-east-1.amazonaws.com"
}
},
@@ -605,6 +591,14 @@
"documentation": "For region us-east-1 with FIPS disabled and DualStack enabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
"url": "https://account.us-east-1.api.aws"
}
},
@@ -622,7 +616,6 @@
"authSchemes": [
{
"name": "sigv4",
- "signingName": "account",
"signingRegion": "us-east-1"
}
]
@@ -637,75 +630,76 @@
}
},
{
- "documentation": "For region aws-cn-global with FIPS disabled and DualStack disabled",
+ "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack enabled",
"expect": {
"endpoint": {
"properties": {
"authSchemes": [
{
"name": "sigv4",
- "signingName": "account",
"signingRegion": "cn-northwest-1"
}
]
},
- "url": "https://account.cn-northwest-1.amazonaws.com.cn"
+ "url": "https://account-fips.cn-northwest-1.api.amazonwebservices.com.cn"
}
},
"params": {
- "Region": "aws-cn-global",
- "UseFIPS": false,
- "UseDualStack": false
- }
- },
- {
- "documentation": "For region cn-north-1 with FIPS enabled and DualStack enabled",
- "expect": {
- "endpoint": {
- "url": "https://account-fips.cn-north-1.api.amazonwebservices.com.cn"
- }
- },
- "params": {
- "Region": "cn-north-1",
+ "Region": "cn-northwest-1",
"UseFIPS": true,
"UseDualStack": true
}
},
{
- "documentation": "For region cn-north-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://account-fips.cn-north-1.amazonaws.com.cn"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "cn-northwest-1"
+ }
+ ]
+ },
+ "url": "https://account-fips.cn-northwest-1.amazonaws.com.cn"
}
},
"params": {
- "Region": "cn-north-1",
+ "Region": "cn-northwest-1",
"UseFIPS": true,
"UseDualStack": false
}
},
{
- "documentation": "For region cn-north-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://account.cn-north-1.api.amazonwebservices.com.cn"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "cn-northwest-1"
+ }
+ ]
+ },
+ "url": "https://account.cn-northwest-1.api.amazonwebservices.com.cn"
}
},
"params": {
- "Region": "cn-north-1",
+ "Region": "cn-northwest-1",
"UseFIPS": false,
"UseDualStack": true
}
},
{
- "documentation": "For region cn-north-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
"properties": {
"authSchemes": [
{
"name": "sigv4",
- "signingName": "account",
"signingRegion": "cn-northwest-1"
}
]
@@ -714,59 +708,91 @@
}
},
"params": {
- "Region": "cn-north-1",
+ "Region": "cn-northwest-1",
"UseFIPS": false,
"UseDualStack": false
}
},
{
- "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack enabled",
+ "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://account-fips.us-gov-east-1.api.aws"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-gov-west-1"
+ }
+ ]
+ },
+ "url": "https://account-fips.us-gov-west-1.api.aws"
}
},
"params": {
- "Region": "us-gov-east-1",
+ "Region": "us-gov-west-1",
"UseFIPS": true,
"UseDualStack": true
}
},
{
- "documentation": "For region us-gov-east-1 with FIPS enabled and DualStack disabled",
+ "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://account-fips.us-gov-east-1.amazonaws.com"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-gov-west-1"
+ }
+ ]
+ },
+ "url": "https://account-fips.us-gov-west-1.amazonaws.com"
}
},
"params": {
- "Region": "us-gov-east-1",
+ "Region": "us-gov-west-1",
"UseFIPS": true,
"UseDualStack": false
}
},
{
- "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack enabled",
+ "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled",
"expect": {
"endpoint": {
- "url": "https://account.us-gov-east-1.api.aws"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-gov-west-1"
+ }
+ ]
+ },
+ "url": "https://account.us-gov-west-1.api.aws"
}
},
"params": {
- "Region": "us-gov-east-1",
+ "Region": "us-gov-west-1",
"UseFIPS": false,
"UseDualStack": true
}
},
{
- "documentation": "For region us-gov-east-1 with FIPS disabled and DualStack disabled",
+ "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://account.us-gov-east-1.amazonaws.com"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-gov-west-1"
+ }
+ ]
+ },
+ "url": "https://account.us-gov-west-1.amazonaws.com"
}
},
"params": {
- "Region": "us-gov-east-1",
+ "Region": "us-gov-west-1",
"UseFIPS": false,
"UseDualStack": false
}
@@ -786,6 +812,14 @@
"documentation": "For region us-iso-east-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
"url": "https://account-fips.us-iso-east-1.c2s.ic.gov"
}
},
@@ -810,6 +844,14 @@
"documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
"url": "https://account.us-iso-east-1.c2s.ic.gov"
}
},
@@ -834,6 +876,14 @@
"documentation": "For region us-isob-east-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
"url": "https://account-fips.us-isob-east-1.sc2s.sgov.gov"
}
},
@@ -858,6 +908,14 @@
"documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
"url": "https://account.us-isob-east-1.sc2s.sgov.gov"
}
},
@@ -868,54 +926,131 @@
}
},
{
- "documentation": "For custom endpoint with region set and fips disabled and dualstack disabled",
+ "documentation": "For region eu-isoe-west-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "error": "FIPS and DualStack are enabled, but this partition does not support one or both"
+ },
+ "params": {
+ "Region": "eu-isoe-west-1",
+ "UseFIPS": true,
+ "UseDualStack": true
+ }
+ },
+ {
+ "documentation": "For region eu-isoe-west-1 with FIPS enabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://example.com"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "eu-isoe-west-1"
+ }
+ ]
+ },
+ "url": "https://account-fips.eu-isoe-west-1.cloud.adc-e.uk"
}
},
"params": {
- "Region": "us-east-1",
+ "Region": "eu-isoe-west-1",
+ "UseFIPS": true,
+ "UseDualStack": false
+ }
+ },
+ {
+ "documentation": "For region eu-isoe-west-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "error": "DualStack is enabled but this partition does not support DualStack"
+ },
+ "params": {
+ "Region": "eu-isoe-west-1",
"UseFIPS": false,
- "UseDualStack": false,
- "Endpoint": "https://example.com"
+ "UseDualStack": true
}
},
{
- "documentation": "For custom endpoint with region not set and fips disabled and dualstack disabled",
+ "documentation": "For region eu-isoe-west-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://example.com"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "eu-isoe-west-1"
+ }
+ ]
+ },
+ "url": "https://account.eu-isoe-west-1.cloud.adc-e.uk"
}
},
"params": {
+ "Region": "eu-isoe-west-1",
"UseFIPS": false,
- "UseDualStack": false,
- "Endpoint": "https://example.com"
+ "UseDualStack": false
}
},
{
- "documentation": "For custom endpoint with fips enabled and dualstack disabled",
+ "documentation": "For region us-isof-south-1 with FIPS enabled and DualStack enabled",
"expect": {
- "error": "Invalid Configuration: FIPS and custom endpoint are not supported"
+ "error": "FIPS and DualStack are enabled, but this partition does not support one or both"
},
"params": {
- "Region": "us-east-1",
+ "Region": "us-isof-south-1",
"UseFIPS": true,
- "UseDualStack": false,
- "Endpoint": "https://example.com"
+ "UseDualStack": true
}
},
{
- "documentation": "For custom endpoint with fips disabled and dualstack enabled",
+ "documentation": "For region us-isof-south-1 with FIPS enabled and DualStack disabled",
"expect": {
- "error": "Invalid Configuration: Dualstack and custom endpoint are not supported"
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isof-south-1"
+ }
+ ]
+ },
+ "url": "https://account-fips.us-isof-south-1.csp.hci.ic.gov"
+ }
},
"params": {
- "Region": "us-east-1",
+ "Region": "us-isof-south-1",
+ "UseFIPS": true,
+ "UseDualStack": false
+ }
+ },
+ {
+ "documentation": "For region us-isof-south-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "error": "DualStack is enabled but this partition does not support DualStack"
+ },
+ "params": {
+ "Region": "us-isof-south-1",
+ "UseFIPS": false,
+ "UseDualStack": true
+ }
+ },
+ {
+ "documentation": "For region us-isof-south-1 with FIPS disabled and DualStack disabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isof-south-1"
+ }
+ ]
+ },
+ "url": "https://account.us-isof-south-1.csp.hci.ic.gov"
+ }
+ },
+ "params": {
+ "Region": "us-isof-south-1",
"UseFIPS": false,
- "UseDualStack": true,
- "Endpoint": "https://example.com"
+ "UseDualStack": false
}
},
{
diff --git a/codegen/sdk-codegen/aws-models/amplify.json b/codegen/sdk-codegen/aws-models/amplify.json
index 45ba331ec0a..61be8fba947 100644
--- a/codegen/sdk-codegen/aws-models/amplify.json
+++ b/codegen/sdk-codegen/aws-models/amplify.json
@@ -1100,14 +1100,14 @@
"createTime": {
"target": "com.amazonaws.amplify#CreateTime",
"traits": {
- "smithy.api#documentation": "Creates a date and time for the Amplify app.
",
+ "smithy.api#documentation": "A timestamp of when Amplify created the application.
",
"smithy.api#required": {}
}
},
"updateTime": {
"target": "com.amazonaws.amplify#UpdateTime",
"traits": {
- "smithy.api#documentation": "Updates the date and time for the Amplify app.
",
+ "smithy.api#documentation": "A timestamp of when Amplify updated the application.
",
"smithy.api#required": {}
}
},
@@ -1210,6 +1210,18 @@
"traits": {
"smithy.api#documentation": "The cache configuration for the Amplify app. If you don't specify the\n cache configuration type
, Amplify uses the default\n AMPLIFY_MANAGED
setting.
"
}
+ },
+ "webhookCreateTime": {
+ "target": "com.amazonaws.amplify#webhookCreateTime",
+ "traits": {
+ "smithy.api#documentation": "A timestamp of when Amplify created the webhook in your Git repository.
"
+ }
+ },
+ "wafConfiguration": {
+ "target": "com.amazonaws.amplify#WafConfiguration",
+ "traits": {
+ "smithy.api#documentation": "Describes the Firewall configuration for the Amplify app. Firewall support enables you to protect your hosted applications with a direct integration\n with WAF.
"
+ }
}
},
"traits": {
@@ -1587,14 +1599,14 @@
"createTime": {
"target": "com.amazonaws.amplify#CreateTime",
"traits": {
- "smithy.api#documentation": " The creation date and time for a branch that is part of an Amplify app.
",
+ "smithy.api#documentation": "A timestamp of when Amplify created the branch.
",
"smithy.api#required": {}
}
},
"updateTime": {
"target": "com.amazonaws.amplify#UpdateTime",
"traits": {
- "smithy.api#documentation": " The last updated date and time for a branch that is part of an Amplify app.
",
+ "smithy.api#documentation": "A timestamp for the last updated time for a branch.
",
"smithy.api#required": {}
}
},
@@ -4169,6 +4181,12 @@
"com.amazonaws.amplify#JobStatus": {
"type": "enum",
"members": {
+ "CREATED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "CREATED"
+ }
+ },
"PENDING": {
"target": "smithy.api#Unit",
"traits": {
@@ -4253,7 +4271,7 @@
"commitTime": {
"target": "com.amazonaws.amplify#CommitTime",
"traits": {
- "smithy.api#documentation": " The commit date and time for the job.
",
+ "smithy.api#documentation": "The commit date and time for the job.
",
"smithy.api#required": {}
}
},
@@ -6686,6 +6704,77 @@
"com.amazonaws.amplify#Verified": {
"type": "boolean"
},
+ "com.amazonaws.amplify#WafConfiguration": {
+ "type": "structure",
+ "members": {
+ "webAclArn": {
+ "target": "com.amazonaws.amplify#WebAclArn",
+ "traits": {
+ "smithy.api#documentation": "The Amazon Resource Name (ARN) for the web ACL associated with an Amplify app.
"
+ }
+ },
+ "wafStatus": {
+ "target": "com.amazonaws.amplify#WafStatus",
+ "traits": {
+ "smithy.api#documentation": "The status of the process to associate or disassociate a web ACL to an Amplify app.
"
+ }
+ },
+ "statusReason": {
+ "target": "com.amazonaws.amplify#StatusReason",
+ "traits": {
+ "smithy.api#documentation": "The reason for the current status of the Firewall configuration.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Describes the Firewall configuration for a hosted Amplify application.\n Firewall support enables you to protect your web applications with a direct integration\n with WAF. For more information about using WAF protections for an Amplify application, see\n Firewall support for hosted sites in the Amplify\n User Guide.
"
+ }
+ },
+ "com.amazonaws.amplify#WafStatus": {
+ "type": "enum",
+ "members": {
+ "ASSOCIATING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ASSOCIATING"
+ }
+ },
+ "ASSOCIATION_FAILED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ASSOCIATION_FAILED"
+ }
+ },
+ "ASSOCIATION_SUCCESS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ASSOCIATION_SUCCESS"
+ }
+ },
+ "DISASSOCIATING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISASSOCIATING"
+ }
+ },
+ "DISASSOCIATION_FAILED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISASSOCIATION_FAILED"
+ }
+ }
+ }
+ },
+ "com.amazonaws.amplify#WebAclArn": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 512
+ },
+ "smithy.api#pattern": "^arn:aws:wafv2:"
+ }
+ },
"com.amazonaws.amplify#Webhook": {
"type": "structure",
"members": {
@@ -6727,14 +6816,14 @@
"createTime": {
"target": "com.amazonaws.amplify#CreateTime",
"traits": {
- "smithy.api#documentation": "The create date and time for a webhook.
",
+ "smithy.api#documentation": "A timestamp of when Amplify created the webhook in your Git repository.
",
"smithy.api#required": {}
}
},
"updateTime": {
"target": "com.amazonaws.amplify#UpdateTime",
"traits": {
- "smithy.api#documentation": "Updates the date and time for a webhook.
",
+ "smithy.api#documentation": "A timestamp of when Amplify updated the webhook in your Git repository.
",
"smithy.api#required": {}
}
}
@@ -6776,6 +6865,9 @@
"member": {
"target": "com.amazonaws.amplify#Webhook"
}
+ },
+ "com.amazonaws.amplify#webhookCreateTime": {
+ "type": "timestamp"
}
}
}
diff --git a/codegen/sdk-codegen/aws-models/backup.json b/codegen/sdk-codegen/aws-models/backup.json
index b99cf3683b7..0b959d5a089 100644
--- a/codegen/sdk-codegen/aws-models/backup.json
+++ b/codegen/sdk-codegen/aws-models/backup.json
@@ -743,6 +743,12 @@
"traits": {
"smithy.api#documentation": "The timezone in which the schedule expression is set. By default, \n ScheduleExpressions are in UTC. You can modify this to a specified timezone.
"
}
+ },
+ "IndexActions": {
+ "target": "com.amazonaws.backup#IndexActions",
+ "traits": {
+ "smithy.api#documentation": "IndexActions is an array you use to specify how backup data should \n be indexed.
\n Each BackupRule can have 0 or 1 IndexAction, as each backup can have up \n to one index associated with it.
\n Within the array is ResourceTypes. Only one will be accepted for each BackupRule.
"
+ }
}
},
"traits": {
@@ -813,6 +819,12 @@
"traits": {
"smithy.api#documentation": "The timezone in which the schedule expression is set. By default, \n ScheduleExpressions are in UTC. You can modify this to a specified timezone.
"
}
+ },
+ "IndexActions": {
+ "target": "com.amazonaws.backup#IndexActions",
+ "traits": {
+ "smithy.api#documentation": "There can be up to one IndexAction in each BackupRule, as each backup \n can have 0 or 1 backup index associated with it.
\n Within the array is ResourceTypes. Only one resource type will \n be accepted for each BackupRule. Valid values:
\n "
+ }
}
},
"traits": {
@@ -2850,6 +2862,9 @@
{
"target": "com.amazonaws.backup#GetLegalHold"
},
+ {
+ "target": "com.amazonaws.backup#GetRecoveryPointIndexDetails"
+ },
{
"target": "com.amazonaws.backup#GetRecoveryPointRestoreMetadata"
},
@@ -2898,6 +2913,9 @@
{
"target": "com.amazonaws.backup#ListFrameworks"
},
+ {
+ "target": "com.amazonaws.backup#ListIndexedRecoveryPoints"
+ },
{
"target": "com.amazonaws.backup#ListLegalHolds"
},
@@ -2982,6 +3000,9 @@
{
"target": "com.amazonaws.backup#UpdateGlobalSettings"
},
+ {
+ "target": "com.amazonaws.backup#UpdateRecoveryPointIndexSettings"
+ },
{
"target": "com.amazonaws.backup#UpdateRecoveryPointLifecycle"
},
@@ -5475,6 +5496,18 @@
"traits": {
"smithy.api#documentation": "The type of vault in which the described recovery point is stored.
"
}
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status for the backup index associated with the specified recovery\n point.
\n Statuses are: PENDING
| ACTIVE
| FAILED
|\n DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
can be\n included in a search.
"
+ }
+ },
+ "IndexStatusMessage": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "A string in the form of a detailed message explaining the status of a backup index\n associated with the recovery point.
"
+ }
}
},
"traits": {
@@ -6717,6 +6750,124 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.backup#GetRecoveryPointIndexDetails": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backup#GetRecoveryPointIndexDetailsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backup#GetRecoveryPointIndexDetailsOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backup#InvalidParameterValueException"
+ },
+ {
+ "target": "com.amazonaws.backup#MissingParameterValueException"
+ },
+ {
+ "target": "com.amazonaws.backup#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.backup#ServiceUnavailableException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "This operation returns the metadata and details specific to \n the backup index associated with the specified recovery point.
",
+ "smithy.api#http": {
+ "method": "GET",
+ "uri": "/backup-vaults/{BackupVaultName}/recovery-points/{RecoveryPointArn}/index",
+ "code": 200
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backup#GetRecoveryPointIndexDetailsInput": {
+ "type": "structure",
+ "members": {
+ "BackupVaultName": {
+ "target": "com.amazonaws.backup#BackupVaultName",
+ "traits": {
+ "smithy.api#documentation": "The name of a logical container where backups are stored. Backup vaults are identified\n by names that are unique to the account used to create them and the Region where they are\n created.
\n Accepted characters include lowercase letters, numbers, and hyphens.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "RecoveryPointArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backup#GetRecoveryPointIndexDetailsOutput": {
+ "type": "structure",
+ "members": {
+ "RecoveryPointArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.
"
+ }
+ },
+ "BackupVaultArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies the backup vault where the recovery \n point index is stored.
\n For example,\n arn:aws:backup:us-east-1:123456789012:backup-vault:aBackupVault
.
"
+ }
+ },
+ "SourceResourceArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "A string of the Amazon Resource Name (ARN) that uniquely identifies \n the source resource.
"
+ }
+ },
+ "IndexCreationDate": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a backup index was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "IndexDeletionDate": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a backup index was deleted, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "IndexCompletionDate": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a backup index finished creation, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status for the backup index associated \n with the specified recovery point.
\n Statuses are: PENDING
| ACTIVE
| FAILED
| DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
\n can be included in a search.
"
+ }
+ },
+ "IndexStatusMessage": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "A detailed message explaining the status of a backup index associated \n with the recovery point.
"
+ }
+ },
+ "TotalItemsIndexed": {
+ "target": "com.amazonaws.backup#Long",
+ "traits": {
+ "smithy.api#documentation": "Count of items within the backup index associated with the \n recovery point.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.backup#GetRecoveryPointRestoreMetadata": {
"type": "operation",
"input": {
@@ -7133,6 +7284,140 @@
"com.amazonaws.backup#IAMRoleArn": {
"type": "string"
},
+ "com.amazonaws.backup#Index": {
+ "type": "enum",
+ "members": {
+ "ENABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ENABLED"
+ }
+ },
+ "DISABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISABLED"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backup#IndexAction": {
+ "type": "structure",
+ "members": {
+ "ResourceTypes": {
+ "target": "com.amazonaws.backup#ResourceTypes",
+ "traits": {
+ "smithy.api#documentation": "0 or 1 index action will be accepted for each BackupRule.
\n Valid values:
\n "
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This is an optional array within a BackupRule.
\n IndexAction consists of one ResourceTypes.
"
+ }
+ },
+ "com.amazonaws.backup#IndexActions": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backup#IndexAction"
+ }
+ },
+ "com.amazonaws.backup#IndexStatus": {
+ "type": "enum",
+ "members": {
+ "PENDING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "PENDING"
+ }
+ },
+ "ACTIVE": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ACTIVE"
+ }
+ },
+ "FAILED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "FAILED"
+ }
+ },
+ "DELETING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DELETING"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backup#IndexedRecoveryPoint": {
+ "type": "structure",
+ "members": {
+ "RecoveryPointArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
\n
"
+ }
+ },
+ "SourceResourceArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "A string of the Amazon Resource Name (ARN) that uniquely identifies \n the source resource.
"
+ }
+ },
+ "IamRoleArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "This specifies the IAM role ARN used for this operation.
\n For example, arn:aws:iam::123456789012:role/S3Access
"
+ }
+ },
+ "BackupCreationDate": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a backup was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "ResourceType": {
+ "target": "com.amazonaws.backup#ResourceType",
+ "traits": {
+ "smithy.api#documentation": "The resource type of the indexed recovery point.
\n "
+ }
+ },
+ "IndexCreationDate": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a backup index was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CreationDate
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status for the backup index associated \n with the specified recovery point.
\n Statuses are: PENDING
| ACTIVE
| FAILED
| DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
\n can be included in a search.
"
+ }
+ },
+ "IndexStatusMessage": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "A string in the form of a detailed message explaining the status of a backup index associated \n with the recovery point.
"
+ }
+ },
+ "BackupVaultArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies the backup vault where the recovery \n point index is stored.
\n For example,\n arn:aws:backup:us-east-1:123456789012:backup-vault:aBackupVault
.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This is a recovery point that has an associated backup index.
\n Only recovery points with a backup index can be \n included in a search.
"
+ }
+ },
+ "com.amazonaws.backup#IndexedRecoveryPointList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backup#IndexedRecoveryPoint"
+ }
+ },
"com.amazonaws.backup#InvalidParameterValueException": {
"type": "structure",
"members": {
@@ -8422,6 +8707,118 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.backup#ListIndexedRecoveryPoints": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backup#ListIndexedRecoveryPointsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backup#ListIndexedRecoveryPointsOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backup#InvalidParameterValueException"
+ },
+ {
+ "target": "com.amazonaws.backup#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.backup#ServiceUnavailableException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "This operation returns a list of recovery points that have an \n associated index, belonging to the specified account.
\n Optional parameters you can include are: MaxResults; \n NextToken; SourceResourceArns; CreatedBefore; CreatedAfter; \n and ResourceType.
",
+ "smithy.api#http": {
+ "method": "GET",
+ "uri": "/indexes/recovery-point",
+ "code": 200
+ },
+ "smithy.api#idempotent": {},
+ "smithy.api#paginated": {
+ "inputToken": "NextToken",
+ "outputToken": "NextToken",
+ "items": "IndexedRecoveryPoints",
+ "pageSize": "MaxResults"
+ }
+ }
+ },
+ "com.amazonaws.backup#ListIndexedRecoveryPointsInput": {
+ "type": "structure",
+ "members": {
+ "NextToken": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "The next item following a partial list of returned recovery points.
\n For example, if a request\n is made to return MaxResults
number of indexed recovery points, NextToken
\n allows you to return more items in your list starting at the location pointed to by the\n next token.
",
+ "smithy.api#httpQuery": "nextToken"
+ }
+ },
+ "MaxResults": {
+ "target": "com.amazonaws.backup#MaxResults",
+ "traits": {
+ "smithy.api#documentation": "The maximum number of resource list items to be returned.
",
+ "smithy.api#httpQuery": "maxResults"
+ }
+ },
+ "SourceResourceArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "A string of the Amazon Resource Name (ARN) that uniquely identifies \n the source resource.
",
+ "smithy.api#httpQuery": "sourceResourceArn"
+ }
+ },
+ "CreatedBefore": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "Returns only indexed recovery points that were created before the \n specified date.
",
+ "smithy.api#httpQuery": "createdBefore"
+ }
+ },
+ "CreatedAfter": {
+ "target": "com.amazonaws.backup#timestamp",
+ "traits": {
+ "smithy.api#documentation": "Returns only indexed recovery points that were created after the \n specified date.
",
+ "smithy.api#httpQuery": "createdAfter"
+ }
+ },
+ "ResourceType": {
+ "target": "com.amazonaws.backup#ResourceType",
+ "traits": {
+ "smithy.api#documentation": "Returns a list of indexed recovery points for the specified \n resource type(s).
\n Accepted values include:
\n ",
+ "smithy.api#httpQuery": "resourceType"
+ }
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "Include this parameter to filter the returned list by \n the indicated statuses.
\n Accepted values: PENDING
| ACTIVE
| FAILED
| DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
\n can be included in a search.
",
+ "smithy.api#httpQuery": "indexStatus"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backup#ListIndexedRecoveryPointsOutput": {
+ "type": "structure",
+ "members": {
+ "IndexedRecoveryPoints": {
+ "target": "com.amazonaws.backup#IndexedRecoveryPointList",
+ "traits": {
+ "smithy.api#documentation": "This is a list of recovery points that have an \n associated index, belonging to the specified account.
"
+ }
+ },
+ "NextToken": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "The next item following a partial list of returned recovery points.
\n For example, if a request\n is made to return MaxResults
number of indexed recovery points, NextToken
\n allows you to return more items in your list starting at the location pointed to by the\n next token.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.backup#ListLegalHolds": {
"type": "operation",
"input": {
@@ -10316,6 +10713,18 @@
"traits": {
"smithy.api#documentation": "The type of vault in which the described recovery point is stored.
"
}
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status for the backup index associated \n with the specified recovery point.
\n Statuses are: PENDING
| ACTIVE
| FAILED
|\n DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
\n can be included in a search.
"
+ }
+ },
+ "IndexStatusMessage": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "A string in the form of a detailed message explaining the status of a backup index associated \n with the recovery point.
"
+ }
}
},
"traits": {
@@ -10397,6 +10806,18 @@
"traits": {
"smithy.api#documentation": "The type of vault in which the described recovery point is \n stored.
"
}
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status for the backup index associated \n with the specified recovery point.
\n Statuses are: PENDING
| ACTIVE
| FAILED
| DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
\n can be included in a search.
"
+ }
+ },
+ "IndexStatusMessage": {
+ "target": "com.amazonaws.backup#string",
+ "traits": {
+ "smithy.api#documentation": "A string in the form of a detailed message explaining the status of a backup index\n associated with the recovery point.
"
+ }
}
},
"traits": {
@@ -11851,6 +12272,12 @@
"traits": {
"smithy.api#documentation": "The backup option for a selected resource. This option is only available for\n Windows Volume Shadow Copy Service (VSS) backup jobs.
\n Valid values: Set to \"WindowsVSS\":\"enabled\"
to enable the\n WindowsVSS
backup option and create a Windows VSS backup. Set to\n \"WindowsVSS\"\"disabled\"
to create a regular backup. The\n WindowsVSS
option is not enabled by default.
"
}
+ },
+ "Index": {
+ "target": "com.amazonaws.backup#Index",
+ "traits": {
+ "smithy.api#documentation": "Include this parameter to enable index creation if your backup \n job has a resource type that supports backup indexes.
\n Resource types that support backup indexes include:
\n \n Index can have 1 of 2 possible values, either ENABLED
or \n DISABLED
.
\n To create a backup index for an eligible ACTIVE
recovery point \n that does not yet have a backup index, set value to ENABLED
.
\n To delete a backup index, set value to DISABLED
.
"
+ }
}
},
"traits": {
@@ -12621,6 +13048,110 @@
"smithy.api#input": {}
}
},
+ "com.amazonaws.backup#UpdateRecoveryPointIndexSettings": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backup#UpdateRecoveryPointIndexSettingsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backup#UpdateRecoveryPointIndexSettingsOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backup#InvalidParameterValueException"
+ },
+ {
+ "target": "com.amazonaws.backup#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.backup#MissingParameterValueException"
+ },
+ {
+ "target": "com.amazonaws.backup#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.backup#ServiceUnavailableException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "This operation updates the settings of a recovery point index.
\n Required: BackupVaultName, RecoveryPointArn, and IAMRoleArn
",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/backup-vaults/{BackupVaultName}/recovery-points/{RecoveryPointArn}/index",
+ "code": 200
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backup#UpdateRecoveryPointIndexSettingsInput": {
+ "type": "structure",
+ "members": {
+ "BackupVaultName": {
+ "target": "com.amazonaws.backup#BackupVaultName",
+ "traits": {
+ "smithy.api#documentation": "The name of a logical container where backups are stored. Backup vaults are identified\n by names that are unique to the account used to create them and the Region where they are\n created.
\n Accepted characters include lowercase letters, numbers, and hyphens.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "RecoveryPointArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "IamRoleArn": {
+ "target": "com.amazonaws.backup#IAMRoleArn",
+ "traits": {
+ "smithy.api#documentation": "This specifies the IAM role ARN used for this operation.
\n For example, arn:aws:iam::123456789012:role/S3Access
"
+ }
+ },
+ "Index": {
+ "target": "com.amazonaws.backup#Index",
+ "traits": {
+ "smithy.api#documentation": "Index can have 1 of 2 possible values, either ENABLED
or \n DISABLED
.
\n To create a backup index for an eligible ACTIVE
recovery point \n that does not yet have a backup index, set value to ENABLED
.
\n To delete a backup index, set value to DISABLED
.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backup#UpdateRecoveryPointIndexSettingsOutput": {
+ "type": "structure",
+ "members": {
+ "BackupVaultName": {
+ "target": "com.amazonaws.backup#BackupVaultName",
+ "traits": {
+ "smithy.api#documentation": "The name of a logical container where backups are stored. Backup vaults are identified\n by names that are unique to the account used to create them and the Region where they are\n created.
"
+ }
+ },
+ "RecoveryPointArn": {
+ "target": "com.amazonaws.backup#ARN",
+ "traits": {
+ "smithy.api#documentation": "An ARN that uniquely identifies a recovery point; for example,\n arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.
"
+ }
+ },
+ "IndexStatus": {
+ "target": "com.amazonaws.backup#IndexStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status for the backup index associated \n with the specified recovery point.
\n Statuses are: PENDING
| ACTIVE
| FAILED
| DELETING
\n
\n A recovery point with an index that has the status of ACTIVE
\n can be included in a search.
"
+ }
+ },
+ "Index": {
+ "target": "com.amazonaws.backup#Index",
+ "traits": {
+ "smithy.api#documentation": "Index can have 1 of 2 possible values, either ENABLED
or\n DISABLED
.
\n A value of ENABLED
means a backup index for an eligible ACTIVE
\n recovery point has been created.
\n A value of DISABLED
means a backup index was deleted.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.backup#UpdateRecoveryPointLifecycle": {
"type": "operation",
"input": {
diff --git a/codegen/sdk-codegen/aws-models/backupsearch.json b/codegen/sdk-codegen/aws-models/backupsearch.json
new file mode 100644
index 00000000000..a9bd5537b7e
--- /dev/null
+++ b/codegen/sdk-codegen/aws-models/backupsearch.json
@@ -0,0 +1,2820 @@
+{
+ "smithy": "2.0",
+ "shapes": {
+ "com.amazonaws.backupsearch#AccessDeniedException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "User does not have sufficient access to perform this action.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "You do not have sufficient access to perform this action.
",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 403
+ }
+ },
+ "com.amazonaws.backupsearch#BackupCreationTimeFilter": {
+ "type": "structure",
+ "members": {
+ "CreatedAfter": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "This timestamp includes recovery points only \n created after the specified time.
"
+ }
+ },
+ "CreatedBefore": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "This timestamp includes recovery points only \n created before the specified time.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This filters by recovery points within the CreatedAfter \n and CreatedBefore timestamps.
"
+ }
+ },
+ "com.amazonaws.backupsearch#ConflictException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "Updating or deleting a resource can cause an inconsistent state.
",
+ "smithy.api#required": {}
+ }
+ },
+ "resourceId": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "Identifier of the resource affected.
",
+ "smithy.api#required": {}
+ }
+ },
+ "resourceType": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "Type of the resource affected.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This exception occurs when a conflict with a previous successful\n operation is detected. This generally occurs when the previous \n operation did not have time to propagate to the host serving the \n current request.
\n A retry (with appropriate backoff logic) is the recommended \n response to this exception.
",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 409
+ }
+ },
+ "com.amazonaws.backupsearch#CryoBackupSearchService": {
+ "type": "service",
+ "version": "2018-05-10",
+ "operations": [
+ {
+ "target": "com.amazonaws.backupsearch#ListSearchJobBackups"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ListSearchJobResults"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ListTagsForResource"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#TagResource"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#UntagResource"
+ }
+ ],
+ "resources": [
+ {
+ "target": "com.amazonaws.backupsearch#SearchJob"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#SearchResultExportJob"
+ }
+ ],
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#AccessDeniedException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#InternalServerException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ThrottlingException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ValidationException"
+ }
+ ],
+ "traits": {
+ "aws.api#service": {
+ "sdkId": "BackupSearch",
+ "arnNamespace": "backup-search",
+ "endpointPrefix": "backup-search",
+ "cloudTrailEventSource": "backup.amazonaws.com"
+ },
+ "aws.auth#sigv4": {
+ "name": "backup-search"
+ },
+ "aws.endpoints#dualStackOnlyEndpoints": {},
+ "aws.endpoints#standardPartitionalEndpoints": {
+ "endpointPatternType": "service_region_dnsSuffix"
+ },
+ "aws.protocols#restJson1": {},
+ "smithy.api#cors": {
+ "additionalAllowedHeaders": [
+ "*",
+ "Authorization",
+ "Date",
+ "X-Amz-Date",
+ "X-Amz-Security-Token",
+ "X-Amz-Target",
+ "content-type",
+ "x-amz-content-sha256",
+ "x-amz-user-agent",
+ "x-amzn-platform-id",
+ "x-amzn-trace-id"
+ ],
+ "additionalExposedHeaders": [
+ "x-amzn-errortype",
+ "x-amzn-requestid",
+ "x-amzn-errormessage",
+ "x-amzn-trace-id",
+ "x-amzn-requestid",
+ "x-amz-apigw-id",
+ "date"
+ ]
+ },
+ "smithy.api#documentation": "Backup Search\n Backup Search is the recovery point and item level search for Backup.
\n For additional information, see:
\n ",
+ "smithy.api#paginated": {
+ "inputToken": "NextToken",
+ "outputToken": "NextToken",
+ "pageSize": "MaxResults"
+ },
+ "smithy.api#title": "AWS Backup Search",
+ "smithy.rules#endpointRuleSet": {
+ "version": "1.0",
+ "parameters": {
+ "UseFIPS": {
+ "builtIn": "AWS::UseFIPS",
+ "required": true,
+ "default": false,
+ "documentation": "When true, send this request to the FIPS-compliant regional endpoint. If the configured endpoint does not have a FIPS compliant endpoint, dispatching the request will return an error.",
+ "type": "Boolean"
+ },
+ "Endpoint": {
+ "builtIn": "SDK::Endpoint",
+ "required": false,
+ "documentation": "Override the endpoint used to send this request",
+ "type": "String"
+ },
+ "Region": {
+ "builtIn": "AWS::Region",
+ "required": false,
+ "documentation": "The AWS region used to dispatch the request.",
+ "type": "String"
+ }
+ },
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "isSet",
+ "argv": [
+ {
+ "ref": "Endpoint"
+ }
+ ]
+ }
+ ],
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseFIPS"
+ },
+ true
+ ]
+ }
+ ],
+ "error": "Invalid Configuration: FIPS and custom endpoint are not supported",
+ "type": "error"
+ },
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": {
+ "ref": "Endpoint"
+ },
+ "properties": {},
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ],
+ "type": "tree"
+ },
+ {
+ "conditions": [],
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "isSet",
+ "argv": [
+ {
+ "ref": "Region"
+ }
+ ]
+ }
+ ],
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "aws.partition",
+ "argv": [
+ {
+ "ref": "Region"
+ }
+ ],
+ "assign": "PartitionResult"
+ }
+ ],
+ "rules": [
+ {
+ "conditions": [
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseFIPS"
+ },
+ true
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://backup-search-fips.{PartitionResult#implicitGlobalRegion}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "{PartitionResult#implicitGlobalRegion}"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [],
+ "endpoint": {
+ "url": "https://backup-search.{PartitionResult#implicitGlobalRegion}.{PartitionResult#dualStackDnsSuffix}",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "{PartitionResult#implicitGlobalRegion}"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ }
+ ],
+ "type": "tree"
+ }
+ ],
+ "type": "tree"
+ },
+ {
+ "conditions": [],
+ "error": "Invalid Configuration: Missing Region",
+ "type": "error"
+ }
+ ],
+ "type": "tree"
+ }
+ ]
+ },
+ "smithy.rules#endpointTests": {
+ "testCases": [
+ {
+ "documentation": "For custom endpoint with region not set and fips disabled",
+ "expect": {
+ "endpoint": {
+ "url": "https://example.com"
+ }
+ },
+ "params": {
+ "Endpoint": "https://example.com",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For custom endpoint with fips enabled",
+ "expect": {
+ "error": "Invalid Configuration: FIPS and custom endpoint are not supported"
+ },
+ "params": {
+ "Endpoint": "https://example.com",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.us-east-1.api.aws"
+ }
+ },
+ "params": {
+ "Region": "us-east-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-east-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.us-east-1.api.aws"
+ }
+ },
+ "params": {
+ "Region": "us-east-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For region cn-northwest-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "cn-northwest-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.cn-northwest-1.api.amazonwebservices.com.cn"
+ }
+ },
+ "params": {
+ "Region": "cn-northwest-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region cn-northwest-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "cn-northwest-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.cn-northwest-1.api.amazonwebservices.com.cn"
+ }
+ },
+ "params": {
+ "Region": "cn-northwest-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For region us-gov-west-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-gov-west-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.us-gov-west-1.api.aws"
+ }
+ },
+ "params": {
+ "Region": "us-gov-west-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region us-gov-west-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-gov-west-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.us-gov-west-1.api.aws"
+ }
+ },
+ "params": {
+ "Region": "us-gov-west-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.us-iso-east-1.c2s.ic.gov"
+ }
+ },
+ "params": {
+ "Region": "us-iso-east-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region us-iso-east-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.us-iso-east-1.c2s.ic.gov"
+ }
+ },
+ "params": {
+ "Region": "us-iso-east-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.us-isob-east-1.sc2s.sgov.gov"
+ }
+ },
+ "params": {
+ "Region": "us-isob-east-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region us-isob-east-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.us-isob-east-1.sc2s.sgov.gov"
+ }
+ },
+ "params": {
+ "Region": "us-isob-east-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For region eu-isoe-west-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "eu-isoe-west-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.eu-isoe-west-1.cloud.adc-e.uk"
+ }
+ },
+ "params": {
+ "Region": "eu-isoe-west-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region eu-isoe-west-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "eu-isoe-west-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.eu-isoe-west-1.cloud.adc-e.uk"
+ }
+ },
+ "params": {
+ "Region": "eu-isoe-west-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "For region us-isof-south-1 with FIPS enabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isof-south-1"
+ }
+ ]
+ },
+ "url": "https://backup-search-fips.us-isof-south-1.csp.hci.ic.gov"
+ }
+ },
+ "params": {
+ "Region": "us-isof-south-1",
+ "UseFIPS": true
+ }
+ },
+ {
+ "documentation": "For region us-isof-south-1 with FIPS disabled and DualStack enabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingRegion": "us-isof-south-1"
+ }
+ ]
+ },
+ "url": "https://backup-search.us-isof-south-1.csp.hci.ic.gov"
+ }
+ },
+ "params": {
+ "Region": "us-isof-south-1",
+ "UseFIPS": false
+ }
+ },
+ {
+ "documentation": "Missing region",
+ "expect": {
+ "error": "Invalid Configuration: Missing Region"
+ }
+ }
+ ],
+ "version": "1.0"
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#CurrentSearchProgress": {
+ "type": "structure",
+ "members": {
+ "RecoveryPointsScannedCount": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#documentation": "This number is the sum of all backups that \n have been scanned so far during a search job.
"
+ }
+ },
+ "ItemsScannedCount": {
+ "target": "smithy.api#Long",
+ "traits": {
+ "smithy.api#documentation": "This number is the sum of all items that \n have been scanned so far during a search job.
"
+ }
+ },
+ "ItemsMatchedCount": {
+ "target": "smithy.api#Long",
+ "traits": {
+ "smithy.api#documentation": "This number is the sum of all items that match \n the item filters in a search job in progress.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This contains information results retrieved from \n a search job that may not have completed.
"
+ }
+ },
+ "com.amazonaws.backupsearch#EBSItemFilter": {
+ "type": "structure",
+ "members": {
+ "FilePaths": {
+ "target": "com.amazonaws.backupsearch#StringConditionList",
+ "traits": {
+ "smithy.api#documentation": "You can include 1 to 10 values.
\n If one file path is included, the results will \n return only items that match the file path.
\n If more than one file path is included, the \n results will return all items that match any of the \n file paths.
"
+ }
+ },
+ "Sizes": {
+ "target": "com.amazonaws.backupsearch#LongConditionList",
+ "traits": {
+ "smithy.api#documentation": "You can include 1 to 10 values.
\n If one is included, the results will \n return only items that match.
\n If more than one is included, the \n results will return all items that match any of \n the included values.
"
+ }
+ },
+ "CreationTimes": {
+ "target": "com.amazonaws.backupsearch#TimeConditionList",
+ "traits": {
+ "smithy.api#documentation": "You can include 1 to 10 values.
\n If one is included, the results will \n return only items that match.
\n If more than one is included, the \n results will return all items that match any of \n the included values.
"
+ }
+ },
+ "LastModificationTimes": {
+ "target": "com.amazonaws.backupsearch#TimeConditionList",
+ "traits": {
+ "smithy.api#documentation": "You can include 1 to 10 values.
\n If one is included, the results will \n return only items that match.
\n If more than one is included, the \n results will return all items that match any of \n the included values.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This contains arrays of objects, which may include \n CreationTimes time condition objects, FilePaths \n string objects, LastModificationTimes time \n condition objects,
"
+ }
+ },
+ "com.amazonaws.backupsearch#EBSItemFilters": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#EBSItemFilter"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 10
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#EBSResultItem": {
+ "type": "structure",
+ "members": {
+ "BackupResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for the Amazon Resource \n Name (ARN) of recovery points returned in a search \n of Amazon EBS backup metadata.
"
+ }
+ },
+ "SourceResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for the Amazon Resource \n Name (ARN) of source resources returned in a search \n of Amazon EBS backup metadata.
"
+ }
+ },
+ "BackupVaultName": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "The name of the backup vault.
"
+ }
+ },
+ "FileSystemIdentifier": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for file systems returned \n in a search of Amazon EBS backup metadata.
"
+ }
+ },
+ "FilePath": {
+ "target": "com.amazonaws.backupsearch#FilePath",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for file paths returned \n in a search of Amazon EBS backup metadata.
"
+ }
+ },
+ "FileSize": {
+ "target": "smithy.api#Long",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for file sizes returned \n in a search of Amazon EBS backup metadata.
"
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for creation times returned \n in a search of Amazon EBS backup metadata.
"
+ }
+ },
+ "LastModifiedTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "These are one or more items in the \n results that match values for Last Modified Time returned \n in a search of Amazon EBS backup metadata.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "These are the items returned in the results of \n a search of Amazon EBS backup metadata.
"
+ }
+ },
+ "com.amazonaws.backupsearch#EncryptionKeyArn": {
+ "type": "string",
+ "traits": {
+ "aws.api#arnReference": {
+ "type": "AWS::KMS::Key"
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ExportJobArn": {
+ "type": "string",
+ "traits": {
+ "aws.api#arnReference": {
+ "service": "com.amazonaws.backupsearch#CryoBackupSearchService"
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ExportJobStatus": {
+ "type": "enum",
+ "members": {
+ "RUNNING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "RUNNING"
+ }
+ },
+ "FAILED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "FAILED"
+ }
+ },
+ "COMPLETED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "COMPLETED"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ExportJobSummaries": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#ExportJobSummary"
+ }
+ },
+ "com.amazonaws.backupsearch#ExportJobSummary": {
+ "type": "structure",
+ "members": {
+ "ExportJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "This is the unique string that identifies a \n specific export job.
",
+ "smithy.api#required": {}
+ }
+ },
+ "ExportJobArn": {
+ "target": "com.amazonaws.backupsearch#ExportJobArn",
+ "traits": {
+ "smithy.api#documentation": "This is the unique ARN (Amazon Resource Name) that \n belongs to the new export job.
"
+ }
+ },
+ "Status": {
+ "target": "com.amazonaws.backupsearch#ExportJobStatus",
+ "traits": {
+ "smithy.api#documentation": "The status of the export job is one of the \n following:
\n \n CREATED
; RUNNING
; \n FAILED
; or COMPLETED
.
"
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "This is a timestamp of the time the export job \n was created.
"
+ }
+ },
+ "CompletionTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "This is a timestamp of the time the export job \n compeleted.
"
+ }
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "A status message is a string that is returned for an export\n job.
\n A status message is included for any status other \n than COMPLETED
without issues.
"
+ }
+ },
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn",
+ "traits": {
+ "smithy.api#documentation": "The unique string that identifies the Amazon Resource \n Name (ARN) of the specified search job.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This is the summary of an export job.
"
+ }
+ },
+ "com.amazonaws.backupsearch#ExportSpecification": {
+ "type": "union",
+ "members": {
+ "s3ExportSpecification": {
+ "target": "com.amazonaws.backupsearch#S3ExportSpecification",
+ "traits": {
+ "smithy.api#documentation": "This specifies the destination Amazon S3 \n bucket for the export job. And, if included, it also \n specifies the destination prefix.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "This contains the export specification object.
"
+ }
+ },
+ "com.amazonaws.backupsearch#FilePath": {
+ "type": "string",
+ "traits": {
+ "smithy.api#sensitive": {}
+ }
+ },
+ "com.amazonaws.backupsearch#GenericId": {
+ "type": "string"
+ },
+ "com.amazonaws.backupsearch#GetSearchJob": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#GetSearchJobInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#GetSearchJobOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "This operation retrieves metadata of a search job, \n including its progress.
",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "GET",
+ "uri": "/search-jobs/{SearchJobIdentifier}"
+ },
+ "smithy.api#readonly": {}
+ }
+ },
+ "com.amazonaws.backupsearch#GetSearchJobInput": {
+ "type": "structure",
+ "members": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "Required unique string that specifies the \n search job.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#GetSearchJobOutput": {
+ "type": "structure",
+ "members": {
+ "Name": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "Returned name of the specified search job.
"
+ }
+ },
+ "SearchScopeSummary": {
+ "target": "com.amazonaws.backupsearch#SearchScopeSummary",
+ "traits": {
+ "smithy.api#documentation": "Returned summary of the specified search job scope, \n including:\n
\n \n - \n
TotalBackupsToScanCount, the number of \n recovery points returned by the search.
\n \n - \n
TotalItemsToScanCount, the number of \n items returned by the search.
\n \n
"
+ }
+ },
+ "CurrentSearchProgress": {
+ "target": "com.amazonaws.backupsearch#CurrentSearchProgress",
+ "traits": {
+ "smithy.api#documentation": "Returns numbers representing BackupsScannedCount, \n ItemsScanned, and ItemsMatched.
"
+ }
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "A status message will be returned for either a \n earch job with a status of ERRORED
or a status of \n COMPLETED
jobs with issues.
\n For example, a message may say that a search \n contained recovery points unable to be scanned because \n of a permissions issue.
"
+ }
+ },
+ "EncryptionKeyArn": {
+ "target": "com.amazonaws.backupsearch#EncryptionKeyArn",
+ "traits": {
+ "smithy.api#documentation": "The encryption key for the specified \n search job.
\n Example: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
.
"
+ }
+ },
+ "CompletionTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a search job completed, in Unix format and Coordinated\n Universal Time (UTC). The value of CompletionTime
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "Status": {
+ "target": "com.amazonaws.backupsearch#SearchJobState",
+ "traits": {
+ "smithy.api#documentation": "The current status of the specified search job.
\n A search job may have one of the following statuses: \n RUNNING
; COMPLETED
; STOPPED
; \n FAILED
; TIMED_OUT
; or EXPIRED
\n .
",
+ "smithy.api#required": {}
+ }
+ },
+ "SearchScope": {
+ "target": "com.amazonaws.backupsearch#SearchScope",
+ "traits": {
+ "smithy.api#documentation": "The search scope is all backup \n properties input into a search.
",
+ "smithy.api#required": {}
+ }
+ },
+ "ItemFilters": {
+ "target": "com.amazonaws.backupsearch#ItemFilters",
+ "traits": {
+ "smithy.api#documentation": "Item Filters represent all input item \n properties specified when the search was \n created.
",
+ "smithy.api#required": {}
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that a search job was created, in Unix format and Coordinated\n Universal Time (UTC). The value of CompletionTime
is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
",
+ "smithy.api#required": {}
+ }
+ },
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "The unique string that identifies the specified search job.
",
+ "smithy.api#required": {}
+ }
+ },
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn",
+ "traits": {
+ "smithy.api#documentation": "The unique string that identifies the Amazon Resource \n Name (ARN) of the specified search job.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#GetSearchResultExportJob": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#GetSearchResultExportJobInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#GetSearchResultExportJobOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "This operation retrieves the metadata of an export job.
\n An export job is an operation that transmits the results \n of a search job to a specified S3 bucket in a \n .csv file.
\n An export job allows you to retain results of a search \n beyond the search job's scheduled retention of 7 days.
",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "GET",
+ "uri": "/export-search-jobs/{ExportJobIdentifier}"
+ },
+ "smithy.api#readonly": {}
+ }
+ },
+ "com.amazonaws.backupsearch#GetSearchResultExportJobInput": {
+ "type": "structure",
+ "members": {
+ "ExportJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "This is the unique string that identifies a \n specific export job.
\n Required for this operation.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#GetSearchResultExportJobOutput": {
+ "type": "structure",
+ "members": {
+ "ExportJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "This is the unique string that identifies the \n specified export job.
",
+ "smithy.api#required": {}
+ }
+ },
+ "ExportJobArn": {
+ "target": "com.amazonaws.backupsearch#ExportJobArn",
+ "traits": {
+ "smithy.api#documentation": "The unique Amazon Resource Name (ARN) that uniquely identifies \n the export job.
"
+ }
+ },
+ "Status": {
+ "target": "com.amazonaws.backupsearch#ExportJobStatus",
+ "traits": {
+ "smithy.api#documentation": "This is the current status of the export job.
"
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that an export job was created, in Unix format and Coordinated Universal\n Time (UTC). The value of CreationTime
is accurate to milliseconds. For\n example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "CompletionTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "The date and time that an export job completed, in Unix format and Coordinated Universal\n Time (UTC). The value of CreationTime
is accurate to milliseconds. For\n example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.
"
+ }
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "A status message is a string that is returned for search job \n with a status of FAILED
, along with steps to remedy \n and retry the operation.
"
+ }
+ },
+ "ExportSpecification": {
+ "target": "com.amazonaws.backupsearch#ExportSpecification",
+ "traits": {
+ "smithy.api#documentation": "The export specification consists of the destination \n S3 bucket to which the search results were exported, along \n with the destination prefix.
"
+ }
+ },
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn",
+ "traits": {
+ "smithy.api#documentation": "The unique string that identifies the Amazon Resource \n Name (ARN) of the specified search job.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#IamRoleArn": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 20,
+ "max": 2048
+ },
+ "smithy.api#pattern": "^arn:(?:aws|aws-cn|aws-us-gov):iam::[a-z0-9-]+:role/(.+)$"
+ }
+ },
+ "com.amazonaws.backupsearch#InternalServerException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "Unexpected error during processing of request.
",
+ "smithy.api#required": {}
+ }
+ },
+ "retryAfterSeconds": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#documentation": "Retry the call after number of seconds.
",
+ "smithy.api#httpHeader": "Retry-After"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "An internal server error occurred. Retry your request.
",
+ "smithy.api#error": "server",
+ "smithy.api#httpError": 500,
+ "smithy.api#retryable": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ItemFilters": {
+ "type": "structure",
+ "members": {
+ "S3ItemFilters": {
+ "target": "com.amazonaws.backupsearch#S3ItemFilters",
+ "traits": {
+ "smithy.api#documentation": "This array can contain CreationTimes, ETags, \n ObjectKeys, Sizes, or VersionIds objects.
"
+ }
+ },
+ "EBSItemFilters": {
+ "target": "com.amazonaws.backupsearch#EBSItemFilters",
+ "traits": {
+ "smithy.api#documentation": "This array can contain CreationTimes, \n FilePaths, LastModificationTimes, or Sizes objects.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Item Filters represent all input item \n properties specified when the search was \n created.
\n Contains either EBSItemFilters or \n S3ItemFilters
"
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobBackups": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobBackupsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobBackupsOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "This operation returns a list of all backups (recovery \n points) in a paginated format that were included in \n the search job.
\n If a search does not display an expected backup in \n the results, you can call this operation to display each \n backup included in the search. Any backups that were not \n included because they have a FAILED
status \n from a permissions issue will be displayed, along with a \n status message.
\n Only recovery points with a backup index that has \n a status of ACTIVE
will be included in search results. \n If the index has any other status, its status will be \n displayed along with a status message.
",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "GET",
+ "uri": "/search-jobs/{SearchJobIdentifier}/backups"
+ },
+ "smithy.api#paginated": {
+ "inputToken": "NextToken",
+ "outputToken": "NextToken",
+ "pageSize": "MaxResults",
+ "items": "Results"
+ },
+ "smithy.api#readonly": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobBackupsInput": {
+ "type": "structure",
+ "members": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "The unique string that specifies the search job.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "The next item following a partial list of returned backups \n included in a search job.
\n For example, if a request\n is made to return MaxResults
number of backups, NextToken
\n allows you to return more items in your list starting at the location pointed to by the\n next token.
",
+ "smithy.api#httpQuery": "nextToken"
+ }
+ },
+ "MaxResults": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#default": 1000,
+ "smithy.api#documentation": "The maximum number of resource list items to be returned.
",
+ "smithy.api#httpQuery": "maxResults",
+ "smithy.api#range": {
+ "min": 1,
+ "max": 1000
+ }
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobBackupsOutput": {
+ "type": "structure",
+ "members": {
+ "Results": {
+ "target": "com.amazonaws.backupsearch#SearchJobBackupsResults",
+ "traits": {
+ "smithy.api#documentation": "The recovery points returned the results of a \n search job
",
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "The next item following a partial list of returned backups \n included in a search job.
\n For example, if a request\n is made to return MaxResults
number of backups, NextToken
\n allows you to return more items in your list starting at the location pointed to by the\n next token.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobResults": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobResultsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobResultsOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "aws.api#dataPlane": {},
+ "smithy.api#documentation": "This operation returns a list of a specified search job.
",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "GET",
+ "uri": "/search-jobs/{SearchJobIdentifier}/search-results"
+ },
+ "smithy.api#paginated": {
+ "items": "Results"
+ },
+ "smithy.api#readonly": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobResultsInput": {
+ "type": "structure",
+ "members": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that specifies the search job.</p>",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The next item following a partial list of returned \n search job results.</p>\n <p>For example, if a request\n is made to return <code>MaxResults</code> number of \n search job results, <code>NextToken</code>\n allows you to return more items in your list starting at the location pointed to by the\n next token.</p>",
+ "smithy.api#httpQuery": "nextToken"
+ }
+ },
+ "MaxResults": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#default": 1000,
+ "smithy.api#documentation": "<p>The maximum number of resource list items to be returned.</p>",
+ "smithy.api#httpQuery": "maxResults",
+ "smithy.api#range": {
+ "min": 1,
+ "max": 1000
+ }
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobResultsOutput": {
+ "type": "structure",
+ "members": {
+ "Results": {
+ "target": "com.amazonaws.backupsearch#Results",
+ "traits": {
+ "smithy.api#documentation": "<p>The results consist of either EBSResultItem or S3ResultItem.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The next item following a partial list of \n search job results.</p>\n <p>For example, if a request\n is made to return <code>MaxResults</code> number of backups, <code>NextToken</code>\n allows you to return more items in your list starting at the location pointed to by the\n next token.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobs": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobsOutput"
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This operation returns a list of search jobs belonging \n to an account.</p>",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "GET",
+ "uri": "/search-jobs"
+ },
+ "smithy.api#paginated": {
+ "items": "SearchJobs"
+ },
+ "smithy.api#readonly": {},
+ "smithy.test#smokeTests": [
+ {
+ "id": "ListSearchJobsSuccess",
+ "params": {},
+ "expect": {
+ "success": {}
+ },
+ "vendorParamsShape": "aws.test#AwsVendorParams",
+ "vendorParams": {
+ "region": "us-east-1"
+ }
+ }
+ ]
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobsInput": {
+ "type": "structure",
+ "members": {
+ "ByStatus": {
+ "target": "com.amazonaws.backupsearch#SearchJobState",
+ "traits": {
+ "smithy.api#documentation": "<p>Include this parameter to filter the list by search \n job status.</p>",
+ "smithy.api#httpQuery": "Status",
+ "smithy.api#notProperty": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The next item following a partial list of returned \n search jobs.</p>\n <p>For example, if a request\n is made to return <code>MaxResults</code> number of backups, <code>NextToken</code>\n allows you to return more items in your list starting at the location pointed to by the\n next token.</p>",
+ "smithy.api#httpQuery": "NextToken",
+ "smithy.api#notProperty": {}
+ }
+ },
+ "MaxResults": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#default": 1000,
+ "smithy.api#documentation": "<p>The maximum number of resource list items to be returned.</p>",
+ "smithy.api#httpQuery": "MaxResults",
+ "smithy.api#notProperty": {},
+ "smithy.api#range": {
+ "min": 1,
+ "max": 1000
+ }
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchJobsOutput": {
+ "type": "structure",
+ "members": {
+ "SearchJobs": {
+ "target": "com.amazonaws.backupsearch#SearchJobs",
+ "traits": {
+ "smithy.api#documentation": "<p>The search jobs among the list, with details of \n the returned search jobs.</p>",
+ "smithy.api#notProperty": {},
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The next item following a partial list of returned backups \n included in a search job.</p>\n <p>For example, if a request\n is made to return <code>MaxResults</code> number of backups, <code>NextToken</code>\n allows you to return more items in your list starting at the location pointed to by the\n next token.</p>",
+ "smithy.api#notProperty": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchResultExportJobs": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#ListSearchResultExportJobsInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#ListSearchResultExportJobsOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ServiceQuotaExceededException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation returns a list of jobs that export the results of a search job \n to a specified destination S3 bucket.</p>",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "GET",
+ "uri": "/export-search-jobs"
+ },
+ "smithy.api#paginated": {
+ "items": "ExportJobs"
+ },
+ "smithy.api#readonly": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchResultExportJobsInput": {
+ "type": "structure",
+ "members": {
+ "Status": {
+ "target": "com.amazonaws.backupsearch#ExportJobStatus",
+ "traits": {
+ "smithy.api#documentation": "<p>The search jobs to be included in the export job \n can be filtered by including this parameter.</p>",
+ "smithy.api#httpQuery": "Status",
+ "smithy.api#notProperty": {}
+ }
+ },
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that specifies the search job.</p>",
+ "smithy.api#httpQuery": "SearchJobIdentifier",
+ "smithy.api#notProperty": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The next item following a partial list of returned backups \n included in a search job.</p>\n <p>For example, if a request\n is made to return <code>MaxResults</code> number of backups, <code>NextToken</code>\n allows you to return more items in your list starting at the location pointed to by the\n next token.</p>",
+ "smithy.api#httpQuery": "NextToken",
+ "smithy.api#notProperty": {}
+ }
+ },
+ "MaxResults": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#default": 1000,
+ "smithy.api#documentation": "<p>The maximum number of resource list items to be returned.</p>",
+ "smithy.api#httpQuery": "MaxResults",
+ "smithy.api#notProperty": {},
+ "smithy.api#range": {
+ "min": 1,
+ "max": 1000
+ }
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListSearchResultExportJobsOutput": {
+ "type": "structure",
+ "members": {
+ "ExportJobs": {
+ "target": "com.amazonaws.backupsearch#ExportJobSummaries",
+ "traits": {
+ "smithy.api#documentation": "<p>The operation returns the included export jobs.</p>",
+ "smithy.api#notProperty": {},
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The next item following a partial list of returned backups \n included in a search job.</p>\n <p>For example, if a request\n is made to return <code>MaxResults</code> number of backups, <code>NextToken</code>\n allows you to return more items in your list starting at the location pointed to by the\n next token.</p>",
+ "smithy.api#notProperty": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListTagsForResource": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#ListTagsForResourceRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#ListTagsForResourceResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation returns the tags for a resource type.</p>",
+ "smithy.api#http": {
+ "uri": "/tags/{ResourceArn}",
+ "method": "GET"
+ },
+ "smithy.api#readonly": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListTagsForResourceRequest": {
+ "type": "structure",
+ "members": {
+ "ResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the resource.</p>",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ListTagsForResourceResponse": {
+ "type": "structure",
+ "members": {
+ "Tags": {
+ "target": "com.amazonaws.backupsearch#TagMap",
+ "traits": {
+ "smithy.api#documentation": "<p>List of tags returned by the operation.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#LongCondition": {
+ "type": "structure",
+ "members": {
+ "Value": {
+ "target": "smithy.api#Long",
+ "traits": {
+ "smithy.api#documentation": "<p>The value of an item included in one of the search \n item filters.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "Operator": {
+ "target": "com.amazonaws.backupsearch#LongConditionOperator",
+ "traits": {
+ "smithy.api#default": "EQUALS_TO",
+ "smithy.api#documentation": "<p>A string that defines what values will be \n returned.</p>\n <p>If this is included, avoid combinations of \n operators that will return all possible values. \n For example, including both <code>EQUALS_TO</code>\n and <code>NOT_EQUALS_TO</code> with a value of <code>4</code>\n will return all values.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The long condition contains a <code>Value</code>\n and can optionally contain an <code>Operator</code>.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#LongConditionList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#LongCondition"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 10
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#LongConditionOperator": {
+ "type": "enum",
+ "members": {
+ "EQUALS_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "EQUALS_TO"
+ }
+ },
+ "NOT_EQUALS_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "NOT_EQUALS_TO"
+ }
+ },
+ "LESS_THAN_EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "LESS_THAN_EQUAL_TO"
+ }
+ },
+ "GREATER_THAN_EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "GREATER_THAN_EQUAL_TO"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ObjectKey": {
+ "type": "string",
+ "traits": {
+ "smithy.api#sensitive": {}
+ }
+ },
+ "com.amazonaws.backupsearch#RecoveryPoint": {
+ "type": "string",
+ "traits": {
+ "aws.api#arnReference": {
+ "type": "AWS::Backup::RecoveryPoint"
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#RecoveryPointArnList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#RecoveryPoint"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 50
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ResourceArnList": {
+ "type": "list",
+ "member": {
+ "target": "smithy.api#String"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 50
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ResourceNotFoundException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Request references a resource which does not exist.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "resourceId": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Hypothetical identifier of the resource affected.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "resourceType": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Hypothetical type of the resource affected.</p>",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The resource was not found for this request.</p>\n <p>Confirm that the resource information, such as the ARN or type, is correct \n and exists, then retry the request.</p>",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 404
+ }
+ },
+ "com.amazonaws.backupsearch#ResourceType": {
+ "type": "enum",
+ "members": {
+ "S3": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "S3"
+ }
+ },
+ "EBS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "EBS"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ResourceTypeList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#ResourceType"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 1
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#ResultItem": {
+ "type": "union",
+ "members": {
+ "S3ResultItem": {
+ "target": "com.amazonaws.backupsearch#S3ResultItem",
+ "traits": {
+ "smithy.api#documentation": "<p>These are items returned in the search results \n of an Amazon S3 search.</p>"
+ }
+ },
+ "EBSResultItem": {
+ "target": "com.amazonaws.backupsearch#EBSResultItem",
+ "traits": {
+ "smithy.api#documentation": "<p>These are items returned in the search results \n of an Amazon EBS search.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This is an object representing the item \n returned in the results of a search for a specific \n resource type.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#Results": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#ResultItem"
+ }
+ },
+ "com.amazonaws.backupsearch#S3ExportSpecification": {
+ "type": "structure",
+ "members": {
+ "DestinationBucket": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This specifies the destination Amazon S3 \n bucket for the export job.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "DestinationPrefix": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This specifies the prefix for the destination \n Amazon S3 bucket for the export job.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This specification contains a required string of the \n destination bucket; optionally, you can include the \n destination prefix.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#S3ItemFilter": {
+ "type": "structure",
+ "members": {
+ "ObjectKeys": {
+ "target": "com.amazonaws.backupsearch#StringConditionList",
+ "traits": {
+ "smithy.api#documentation": "<p>You can include 1 to 10 values.</p>\n <p>If one value is included, the results will \n return only items that match the value.</p>\n <p>If more than one value is included, the \n results will return all items that match any of the \n values.</p>"
+ }
+ },
+ "Sizes": {
+ "target": "com.amazonaws.backupsearch#LongConditionList",
+ "traits": {
+ "smithy.api#documentation": "<p>You can include 1 to 10 values.</p>\n <p>If one value is included, the results will \n return only items that match the value.</p>\n <p>If more than one value is included, the \n results will return all items that match any of the \n values.</p>"
+ }
+ },
+ "CreationTimes": {
+ "target": "com.amazonaws.backupsearch#TimeConditionList",
+ "traits": {
+ "smithy.api#documentation": "<p>You can include 1 to 10 values.</p>\n <p>If one value is included, the results will \n return only items that match the value.</p>\n <p>If more than one value is included, the \n results will return all items that match any of the \n values.</p>"
+ }
+ },
+ "VersionIds": {
+ "target": "com.amazonaws.backupsearch#StringConditionList",
+ "traits": {
+ "smithy.api#documentation": "<p>You can include 1 to 10 values.</p>\n <p>If one value is included, the results will \n return only items that match the value.</p>\n <p>If more than one value is included, the \n results will return all items that match any of the \n values.</p>"
+ }
+ },
+ "ETags": {
+ "target": "com.amazonaws.backupsearch#StringConditionList",
+ "traits": {
+ "smithy.api#documentation": "<p>You can include 1 to 10 values.</p>\n <p>If one value is included, the results will \n return only items that match the value.</p>\n <p>If more than one value is included, the \n results will return all items that match any of the \n values.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This contains arrays of objects, which may include \n ObjectKeys, Sizes, CreationTimes, VersionIds, and/or \n ETags.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#S3ItemFilters": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#S3ItemFilter"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 10
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#S3ResultItem": {
+ "type": "structure",
+ "members": {
+ "BackupResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>These are items in the returned results that match \n recovery point Amazon Resource Names (ARN) input during \n a search of Amazon S3 backup metadata.</p>"
+ }
+ },
+ "SourceResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>These are items in the returned results that match \n source Amazon Resource Names (ARN) input during \n a search of Amazon S3 backup metadata.</p>"
+ }
+ },
+ "BackupVaultName": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The name of the backup vault.</p>"
+ }
+ },
+ "ObjectKey": {
+ "target": "com.amazonaws.backupsearch#ObjectKey",
+ "traits": {
+ "smithy.api#documentation": "<p>This is one or more items \n returned in the results of a search of Amazon S3 \n backup metadata that match the values input for \n object key.</p>"
+ }
+ },
+ "ObjectSize": {
+ "target": "smithy.api#Long",
+ "traits": {
+ "smithy.api#documentation": "<p>These are items in the returned results that match \n values for object size(s) input during a search of \n Amazon S3 backup metadata.</p>"
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>These are one or more items in the returned results \n that match values for item creation time input during \n a search of Amazon S3 backup metadata.</p>"
+ }
+ },
+ "ETag": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>These are one or more items in the returned results \n that match values for ETags input during \n a search of Amazon S3 backup metadata.</p>"
+ }
+ },
+ "VersionId": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>These are one or more items in the returned results \n that match values for version IDs input during \n a search of Amazon S3 backup metadata.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>These are the items returned in the results of \n a search of Amazon S3 backup metadata.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJob": {
+ "type": "resource",
+ "identifiers": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId"
+ }
+ },
+ "properties": {
+ "Status": {
+ "target": "com.amazonaws.backupsearch#SearchJobState"
+ },
+ "Name": {
+ "target": "smithy.api#String"
+ },
+ "EncryptionKeyArn": {
+ "target": "com.amazonaws.backupsearch#EncryptionKeyArn"
+ },
+ "SearchScope": {
+ "target": "com.amazonaws.backupsearch#SearchScope"
+ },
+ "ItemFilters": {
+ "target": "com.amazonaws.backupsearch#ItemFilters"
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp"
+ },
+ "CompletionTime": {
+ "target": "smithy.api#Timestamp"
+ },
+ "SearchScopeSummary": {
+ "target": "com.amazonaws.backupsearch#SearchScopeSummary"
+ },
+ "CurrentSearchProgress": {
+ "target": "com.amazonaws.backupsearch#CurrentSearchProgress"
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String"
+ },
+ "ClientToken": {
+ "target": "smithy.api#String"
+ },
+ "Tags": {
+ "target": "com.amazonaws.backupsearch#TagMap"
+ },
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn"
+ }
+ },
+ "create": {
+ "target": "com.amazonaws.backupsearch#StartSearchJob"
+ },
+ "read": {
+ "target": "com.amazonaws.backupsearch#GetSearchJob"
+ },
+ "update": {
+ "target": "com.amazonaws.backupsearch#StopSearchJob"
+ },
+ "list": {
+ "target": "com.amazonaws.backupsearch#ListSearchJobs"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJobArn": {
+ "type": "string",
+ "traits": {
+ "aws.api#arnReference": {
+ "service": "com.amazonaws.backupsearch#CryoBackupSearchService"
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJobBackupsResult": {
+ "type": "structure",
+ "members": {
+ "Status": {
+ "target": "com.amazonaws.backupsearch#SearchJobState",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the status of the search job backup result.</p>"
+ }
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the status message included with the results.</p>"
+ }
+ },
+ "ResourceType": {
+ "target": "com.amazonaws.backupsearch#ResourceType",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the resource type of the search.</p>"
+ }
+ },
+ "BackupResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the backup resources.</p>"
+ }
+ },
+ "SourceResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the source resources.</p>"
+ }
+ },
+ "IndexCreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the creation time of the backup index.</p>"
+ }
+ },
+ "BackupCreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the creation time of the backup (recovery point).</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This contains the information about recovery \n points returned in results of a search job.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJobBackupsResults": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#SearchJobBackupsResult"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJobState": {
+ "type": "enum",
+ "members": {
+ "RUNNING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "RUNNING"
+ }
+ },
+ "COMPLETED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "COMPLETED"
+ }
+ },
+ "STOPPING": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "STOPPING"
+ }
+ },
+ "STOPPED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "STOPPED"
+ }
+ },
+ "FAILED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "FAILED"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJobSummary": {
+ "type": "structure",
+ "members": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that specifies the search job.</p>"
+ }
+ },
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that identifies the Amazon Resource \n Name (ARN) of the specified search job.</p>"
+ }
+ },
+ "Name": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the name of the search job.</p>"
+ }
+ },
+ "Status": {
+ "target": "com.amazonaws.backupsearch#SearchJobState",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the status of the search job.</p>"
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the creation time of the search job.</p>"
+ }
+ },
+ "CompletionTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the completion time of the search job.</p>"
+ }
+ },
+ "SearchScopeSummary": {
+ "target": "com.amazonaws.backupsearch#SearchScopeSummary",
+ "traits": {
+ "smithy.api#documentation": "<p>Returned summary of the specified search job scope, \n including:\n </p>\n <ul>\n <li>\n <p>TotalBackupsToScanCount, the number of \n recovery points returned by the search.</p>\n </li>\n <li>\n <p>TotalItemsToScanCount, the number of \n items returned by the search.</p>\n </li>\n </ul>"
+ }
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>A status message will be returned for either a \n search job with a status of <code>ERRORED</code> or a status of \n <code>COMPLETED</code> jobs with issues.</p>\n <p>For example, a message may say that a search \n contained recovery points unable to be scanned because \n of a permissions issue.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This is information pertaining to a search job.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchJobs": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#SearchJobSummary"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchResultExportJob": {
+ "type": "resource",
+ "identifiers": {
+ "ExportJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId"
+ }
+ },
+ "properties": {
+ "ExportJobArn": {
+ "target": "com.amazonaws.backupsearch#ExportJobArn"
+ },
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId"
+ },
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn"
+ },
+ "Status": {
+ "target": "com.amazonaws.backupsearch#ExportJobStatus"
+ },
+ "StatusMessage": {
+ "target": "smithy.api#String"
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp"
+ },
+ "CompletionTime": {
+ "target": "smithy.api#Timestamp"
+ },
+ "ExportSpecification": {
+ "target": "com.amazonaws.backupsearch#ExportSpecification"
+ },
+ "ClientToken": {
+ "target": "smithy.api#String"
+ },
+ "Tags": {
+ "target": "com.amazonaws.backupsearch#TagMap"
+ },
+ "RoleArn": {
+ "target": "com.amazonaws.backupsearch#IamRoleArn"
+ }
+ },
+ "create": {
+ "target": "com.amazonaws.backupsearch#StartSearchResultExportJob"
+ },
+ "read": {
+ "target": "com.amazonaws.backupsearch#GetSearchResultExportJob"
+ },
+ "list": {
+ "target": "com.amazonaws.backupsearch#ListSearchResultExportJobs"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchScope": {
+ "type": "structure",
+ "members": {
+ "BackupResourceTypes": {
+ "target": "com.amazonaws.backupsearch#ResourceTypeList",
+ "traits": {
+ "smithy.api#documentation": "<p>The resource types included in a search.</p>\n <p>Eligible resource types include S3 and EBS.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "BackupResourceCreationTime": {
+ "target": "com.amazonaws.backupsearch#BackupCreationTimeFilter",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the time a backup resource was created.</p>"
+ }
+ },
+ "SourceResourceArns": {
+ "target": "com.amazonaws.backupsearch#ResourceArnList",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the source resources.</p>"
+ }
+ },
+ "BackupResourceArns": {
+ "target": "com.amazonaws.backupsearch#RecoveryPointArnList",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the backup resources.</p>"
+ }
+ },
+ "BackupResourceTags": {
+ "target": "com.amazonaws.backupsearch#TagMap",
+ "traits": {
+ "smithy.api#documentation": "<p>These are one or more tags on the backup (recovery \n point).</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The search scope is all backup \n properties input into a search.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#SearchScopeSummary": {
+ "type": "structure",
+ "members": {
+ "TotalRecoveryPointsToScanCount": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the count of the total number of backups \n that will be scanned in a search.</p>"
+ }
+ },
+ "TotalItemsToScanCount": {
+ "target": "smithy.api#Long",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the count of the total number of items \n that will be scanned in a search.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The summary of the specified search job scope, \n including:\n </p>\n <ul>\n <li>\n <p>TotalBackupsToScanCount, the number of \n recovery points returned by the search.</p>\n </li>\n <li>\n <p>TotalItemsToScanCount, the number of \n items returned by the search.</p>\n </li>\n </ul>"
+ }
+ },
+ "com.amazonaws.backupsearch#ServiceQuotaExceededException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This request was not successful because it exceeded a service quota.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "resourceId": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Identifier of the resource.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "resourceType": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Type of resource.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "serviceCode": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the code unique to the originating service with the quota.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "quotaCode": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the code specific to the quota type.</p>",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The request was denied due to exceeding the permitted quota limits.</p>",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 402
+ }
+ },
+ "com.amazonaws.backupsearch#StartSearchJob": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#StartSearchJobInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#StartSearchJobOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ConflictException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ServiceQuotaExceededException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation creates a search job which returns \n recovery points filtered by SearchScope and items \n filtered by ItemFilters.</p>\n <p>You can optionally include ClientToken, \n EncryptionKeyArn, Name, and/or Tags.</p>",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "PUT",
+ "uri": "/search-jobs"
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StartSearchJobInput": {
+ "type": "structure",
+ "members": {
+ "Tags": {
+ "target": "com.amazonaws.backupsearch#TagMap",
+ "traits": {
+ "smithy.api#documentation": "<p>The tags to associate with the search job.</p>"
+ }
+ },
+ "Name": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Include alphanumeric characters to create a \n name for this search job.</p>",
+ "smithy.api#length": {
+ "max": 500
+ }
+ }
+ },
+ "EncryptionKeyArn": {
+ "target": "com.amazonaws.backupsearch#EncryptionKeyArn",
+ "traits": {
+ "smithy.api#documentation": "<p>The encryption key for the specified \n search job.</p>"
+ }
+ },
+ "ClientToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Include this parameter to allow multiple identical \n calls for idempotency.</p>\n <p>A client token is valid for 8 hours after the first \n request that uses it is completed. After this time,\n any request with the same token is treated as a \n new request.</p>"
+ }
+ },
+ "SearchScope": {
+ "target": "com.amazonaws.backupsearch#SearchScope",
+ "traits": {
+ "smithy.api#documentation": "<p>This object can contain BackupResourceTypes, \n BackupResourceArns, BackupResourceCreationTime, \n BackupResourceTags, and SourceResourceArns to \n filter the recovery points returned by the search \n job.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "ItemFilters": {
+ "target": "com.amazonaws.backupsearch#ItemFilters",
+ "traits": {
+ "smithy.api#documentation": "<p>Item Filters represent all input item \n properties specified when the search was \n created.</p>\n <p>Contains either EBSItemFilters or \n S3ItemFilters.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StartSearchJobOutput": {
+ "type": "structure",
+ "members": {
+ "SearchJobArn": {
+ "target": "com.amazonaws.backupsearch#SearchJobArn",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that identifies the Amazon Resource \n Name (ARN) of the specified search job.</p>"
+ }
+ },
+ "CreationTime": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>The date and time that a job was created, in Unix format and Coordinated\n Universal Time (UTC). The value of <code>CreationTime</code> is accurate to milliseconds.\n For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087\n AM.</p>"
+ }
+ },
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that specifies the search job.</p>",
+ "smithy.api#notProperty": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StartSearchResultExportJob": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#StartSearchResultExportJobInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#StartSearchResultExportJobOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ConflictException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ServiceQuotaExceededException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation starts a job to export the results \n of a search job to a designated S3 bucket.</p>",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "PUT",
+ "uri": "/export-search-jobs"
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StartSearchResultExportJobInput": {
+ "type": "structure",
+ "members": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that specifies the search job.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "ExportSpecification": {
+ "target": "com.amazonaws.backupsearch#ExportSpecification",
+ "traits": {
+ "smithy.api#documentation": "<p>This specification contains a required string of the \n destination bucket; optionally, you can include the \n destination prefix.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "ClientToken": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Include this parameter to allow multiple identical \n calls for idempotency.</p>\n <p>A client token is valid for 8 hours after the first \n request that uses it is completed. After this time,\n any request with the same token is treated as a \n new request.</p>"
+ }
+ },
+ "Tags": {
+ "target": "com.amazonaws.backupsearch#TagMap",
+ "traits": {
+ "smithy.api#documentation": "<p>Optional tags to include. A tag is a key-value pair you can use to manage, \n filter, and search for your resources. Allowed characters include UTF-8 letters, \n numbers, spaces, and the following characters: + - = . _ : /.</p>"
+ }
+ },
+ "RoleArn": {
+ "target": "com.amazonaws.backupsearch#IamRoleArn",
+ "traits": {
+ "smithy.api#documentation": "<p>This parameter specifies the role ARN used to start \n the search results export jobs.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
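The ClientToken member above makes retried StartSearchResultExportJob (and StartSearchJob) calls idempotent: the service treats repeated requests carrying the same token as one request for 8 hours. The model leaves token generation to the caller, so a sketch of producing one in Go follows; `newClientToken` is a hypothetical helper (the SDK does not export one for this member), and the RFC 4122 v4 UUID shape is just a convenient choice of unique string.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newClientToken returns a random UUID-formatted string suitable for the
// ClientToken member. Reusing the same token on retries of the same
// logical request makes those retries idempotent on the service side.
func newClientToken() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	// Set the version (4) and variant bits per RFC 4122.
	b[6] = (b[6] & 0x0f) | 0x40
	b[8] = (b[8] & 0x3f) | 0x80
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}

func main() {
	tok, err := newClientToken()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(tok)) // 36-character UUID string
}
```

Generate the token once per logical request and reuse it across retries; generating a fresh token on each retry defeats the idempotency window described above.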
+ "com.amazonaws.backupsearch#StartSearchResultExportJobOutput": {
+ "type": "structure",
+ "members": {
+ "ExportJobArn": {
+ "target": "com.amazonaws.backupsearch#ExportJobArn",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the unique ARN (Amazon Resource Name) that \n belongs to the new export job.</p>"
+ }
+ },
+ "ExportJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the unique identifier that \n specifies the new export job.</p>",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StopSearchJob": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#StopSearchJobInput"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#StopSearchJobOutput"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ConflictException"
+ },
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation ends a search job.</p>\n <p>Only a search job with a status of <code>RUNNING</code>\n can be stopped.</p>",
+ "smithy.api#http": {
+ "code": 200,
+ "method": "PUT",
+ "uri": "/search-jobs/{SearchJobIdentifier}/actions/cancel"
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StopSearchJobInput": {
+ "type": "structure",
+ "members": {
+ "SearchJobIdentifier": {
+ "target": "com.amazonaws.backupsearch#GenericId",
+ "traits": {
+ "smithy.api#documentation": "<p>The unique string that specifies the search job.</p>",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StopSearchJobOutput": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#StringCondition": {
+ "type": "structure",
+ "members": {
+ "Value": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The value of the string.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "Operator": {
+ "target": "com.amazonaws.backupsearch#StringConditionOperator",
+ "traits": {
+ "smithy.api#default": "EQUALS_TO",
+ "smithy.api#documentation": "<p>A string that defines what values will be \n returned.</p>\n <p>If this is included, avoid combinations of \n operators that will return all possible values. \n For example, including both <code>EQUALS_TO</code>\n and <code>NOT_EQUALS_TO</code> with a value of <code>4</code>\n will return all values.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>This contains the value of the string and can contain \n one or more operators.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#StringConditionList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#StringCondition"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 10
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#StringConditionOperator": {
+ "type": "enum",
+ "members": {
+ "EQUALS_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "EQUALS_TO"
+ }
+ },
+ "NOT_EQUALS_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "NOT_EQUALS_TO"
+ }
+ },
+ "CONTAINS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "CONTAINS"
+ }
+ },
+ "DOES_NOT_CONTAIN": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DOES_NOT_CONTAIN"
+ }
+ },
+ "BEGINS_WITH": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "BEGINS_WITH"
+ }
+ },
+ "ENDS_WITH": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ENDS_WITH"
+ }
+ },
+ "DOES_NOT_BEGIN_WITH": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DOES_NOT_BEGIN_WITH"
+ }
+ },
+ "DOES_NOT_END_WITH": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DOES_NOT_END_WITH"
+ }
+ }
+ }
+ },
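The StringConditionOperator enum above names eight comparisons but the model does not spell out their semantics. The service evaluates them server-side; as an illustration only, a local Go sketch of the implied string semantics (`matchString` is a hypothetical helper, not part of the generated client):

```go
package main

import (
	"fmt"
	"strings"
)

// matchString sketches the comparison implied by each
// StringConditionOperator value: op compares a candidate string
// against the condition's Value.
func matchString(op, value, candidate string) bool {
	switch op {
	case "EQUALS_TO":
		return candidate == value
	case "NOT_EQUALS_TO":
		return candidate != value
	case "CONTAINS":
		return strings.Contains(candidate, value)
	case "DOES_NOT_CONTAIN":
		return !strings.Contains(candidate, value)
	case "BEGINS_WITH":
		return strings.HasPrefix(candidate, value)
	case "DOES_NOT_BEGIN_WITH":
		return !strings.HasPrefix(candidate, value)
	case "ENDS_WITH":
		return strings.HasSuffix(candidate, value)
	case "DOES_NOT_END_WITH":
		return !strings.HasSuffix(candidate, value)
	}
	return false
}

func main() {
	fmt.Println(matchString("BEGINS_WITH", "backup-", "backup-2024-12")) // true
	fmt.Println(matchString("NOT_EQUALS_TO", "4", "4"))                  // false
}
```

This also makes the documentation's warning concrete: pairing `EQUALS_TO` and `NOT_EQUALS_TO` with the same Value matches every candidate, since one of the two conditions is always true.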
+ "com.amazonaws.backupsearch#TagKeys": {
+ "type": "list",
+ "member": {
+ "target": "smithy.api#String"
+ }
+ },
+ "com.amazonaws.backupsearch#TagMap": {
+ "type": "map",
+ "key": {
+ "target": "smithy.api#String"
+ },
+ "value": {
+ "target": "smithy.api#String"
+ },
+ "traits": {
+ "smithy.api#sparse": {}
+ }
+ },
+ "com.amazonaws.backupsearch#TagResource": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#TagResourceRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#TagResourceResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation puts tags on the resource you indicate.</p>",
+ "smithy.api#http": {
+ "uri": "/tags/{ResourceArn}",
+ "method": "POST"
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backupsearch#TagResourceRequest": {
+ "type": "structure",
+ "members": {
+ "ResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the resource.</p>\n <p>This is the resource that will have the indicated tags.</p>",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "Tags": {
+ "target": "com.amazonaws.backupsearch#TagMap",
+ "traits": {
+ "smithy.api#documentation": "<p>Required tags to include. A tag is a key-value pair you can use to manage, \n filter, and search for your resources. Allowed characters include UTF-8 letters, \n numbers, spaces, and the following characters: + - = . _ : /.</p>",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#TagResourceResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ThrottlingException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>Request was unsuccessful due to request throttling.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "serviceCode": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the code unique to the originating service.</p>"
+ }
+ },
+ "quotaCode": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the code unique to the originating service with the quota.</p>"
+ }
+ },
+ "retryAfterSeconds": {
+ "target": "smithy.api#Integer",
+ "traits": {
+ "smithy.api#documentation": "<p>Retry the call after this number of seconds.</p>",
+ "smithy.api#httpHeader": "Retry-After"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The request was denied due to request throttling.</p>",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 429,
+ "smithy.api#retryable": {
+ "throttling": true
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#TimeCondition": {
+ "type": "structure",
+ "members": {
+ "Value": {
+ "target": "smithy.api#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "<p>This is the timestamp value of the time condition.</p>",
+ "smithy.api#required": {}
+ }
+ },
+ "Operator": {
+ "target": "com.amazonaws.backupsearch#TimeConditionOperator",
+ "traits": {
+ "smithy.api#default": "EQUALS_TO",
+ "smithy.api#documentation": "<p>A string that defines what values will be \n returned.</p>\n <p>If this is included, avoid combinations of \n operators that will return all possible values. \n For example, including both <code>EQUALS_TO</code>\n and <code>NOT_EQUALS_TO</code> with a value of <code>4</code>\n will return all values.</p>"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>A time condition denotes a creation time, last modification time, \n or other time.</p>"
+ }
+ },
+ "com.amazonaws.backupsearch#TimeConditionList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.backupsearch#TimeCondition"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 10
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#TimeConditionOperator": {
+ "type": "enum",
+ "members": {
+ "EQUALS_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "EQUALS_TO"
+ }
+ },
+ "NOT_EQUALS_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "NOT_EQUALS_TO"
+ }
+ },
+ "LESS_THAN_EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "LESS_THAN_EQUAL_TO"
+ }
+ },
+ "GREATER_THAN_EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "GREATER_THAN_EQUAL_TO"
+ }
+ }
+ }
+ },
+ "com.amazonaws.backupsearch#UntagResource": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.backupsearch#UntagResourceRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.backupsearch#UntagResourceResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.backupsearch#ResourceNotFoundException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "<p>This operation removes tags from the specified resource.</p>",
+ "smithy.api#http": {
+ "uri": "/tags/{ResourceArn}",
+ "method": "DELETE"
+ },
+ "smithy.api#idempotent": {}
+ }
+ },
+ "com.amazonaws.backupsearch#UntagResourceRequest": {
+ "type": "structure",
+ "members": {
+ "ResourceArn": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The Amazon Resource Name (ARN) that uniquely identifies \n the resource where you want to remove tags.</p>",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "TagKeys": {
+ "target": "com.amazonaws.backupsearch#TagKeys",
+ "traits": {
+ "smithy.api#documentation": "<p>This required parameter contains the tag keys you \n want to remove from the source.</p>",
+ "smithy.api#httpQuery": "tagKeys",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.backupsearch#UntagResourceResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.backupsearch#ValidationException": {
+ "type": "structure",
+ "members": {
+ "message": {
+ "target": "smithy.api#String",
+ "traits": {
+ "smithy.api#documentation": "<p>The input fails to satisfy the constraints specified by an Amazon service.</p>",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "<p>The input fails to satisfy the constraints specified by a service.</p>",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 400
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/codegen/sdk-codegen/aws-models/batch.json b/codegen/sdk-codegen/aws-models/batch.json
index b7de037903c..7319b98ad97 100644
--- a/codegen/sdk-codegen/aws-models/batch.json
+++ b/codegen/sdk-codegen/aws-models/batch.json
@@ -1810,27 +1810,27 @@
"allocationStrategy": {
"target": "com.amazonaws.batch#CRAllocationStrategy",
"traits": {
- "smithy.api#documentation": "The allocation strategy to use for the compute resource if not enough instances of the best\n fitting instance type can be allocated. This might be because of availability of the instance\n type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n - BEST_FIT (default)
\n - \n
Batch selects an instance type that best fits the needs of the jobs with a preference\n for the lowest-cost instance type. If additional instances of the selected instance type\n aren't available, Batch waits for the additional instances to be available. If there aren't\n enough instances available or the user is reaching Amazon EC2 service limits,\n additional jobs aren't run until the currently running jobs are completed. This allocation\n strategy keeps costs lower but can limit scaling. If you're using Spot Fleets with\n BEST_FIT
, the Spot Fleet IAM Role must be specified. Compute resources that use\n a BEST_FIT
allocation strategy don't support infrastructure updates and can't\n update some parameters. For more information, see Updating compute environments in\n the Batch User Guide.
\n \n - BEST_FIT_PROGRESSIVE
\n - \n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n \n - SPOT_CAPACITY_OPTIMIZED
\n - \n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n - SPOT_PRICE_CAPACITY_OPTIMIZED
\n - \n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n
\n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT
strategy using Spot Instances, Batch might need to exceed\n maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.
"
+ "smithy.api#documentation": "The allocation strategy to use for the compute resource if not enough instances of the best\n fitting instance type can be allocated. This might be because of availability of the instance\n type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n - BEST_FIT (default)
\n - \n
Batch selects an instance type that best fits the needs of the jobs with a preference\n for the lowest-cost instance type. If additional instances of the selected instance type\n aren't available, Batch waits for the additional instances to be available. If there aren't\n enough instances available or the user is reaching Amazon EC2 service limits,\n additional jobs aren't run until the currently running jobs are completed. This allocation\n strategy keeps costs lower but can limit scaling. If you're using Spot Fleets with\n BEST_FIT
, the Spot Fleet IAM Role must be specified. Compute resources that use\n a BEST_FIT
allocation strategy don't support infrastructure updates and can't\n update some parameters. For more information, see Updating compute environments in\n the Batch User Guide.
\n \n - BEST_FIT_PROGRESSIVE
\n - \n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n \n - SPOT_CAPACITY_OPTIMIZED
\n - \n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n - SPOT_PRICE_CAPACITY_OPTIMIZED
\n - \n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n
\n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
(recommended) strategies using On-Demand or Spot \n Instances, and the BEST_FIT
strategy using Spot Instances, Batch might need to \n exceed maxvCpus
to meet your capacity requirements. In this event, Batch never \n exceeds maxvCpus
by more than a single instance.
"
}
},
"minvCpus": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The minimum number of\n vCPUs that\n a\n compute\n environment should maintain (even if the compute environment is DISABLED
).
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "The minimum number of vCPUs that a compute environment should maintain (even if the compute \n environment is DISABLED
).
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
}
},
"maxvCpus": {
"target": "com.amazonaws.batch#Integer",
"traits": {
"smithy.api#clientOptional": {},
- "smithy.api#documentation": "The maximum number of\n vCPUs that a\n compute environment can\n support.
\n \n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT
strategy using Spot Instances, Batch might need to exceed\n maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.
\n ",
+ "smithy.api#documentation": "The maximum number of vCPUs that a compute environment can support.
\n \n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
(recommended) strategies using On-Demand or Spot Instances, \n and the BEST_FIT
strategy using Spot Instances, Batch might need to exceed\n maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.
\n ",
"smithy.api#required": {}
}
},
"desiredvCpus": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The desired number of\n vCPUS in the\n compute environment. Batch modifies this value between the minimum and maximum values based on\n job queue demand.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "The desired number of vCPUS in the compute environment. Batch modifies this value between \n the minimum and maximum values based on job queue demand.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
}
},
"instanceTypes": {
@@ -1889,7 +1889,7 @@
"bidPercentage": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, then the Spot price must be less than 20% of the current On-Demand price for\n that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. If you leave this field empty, the default value is 100% of the On-Demand\n price. For most use cases,\n we recommend leaving this field empty.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, then the Spot price must be less than 20% of the current On-Demand price for\n that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. If you leave this field empty, the default value is 100% of the On-Demand\n price. For most use cases, we recommend leaving this field empty.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
}
},
"spotIamFleetRole": {
@@ -1921,19 +1921,19 @@
"minvCpus": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The minimum number of\n vCPUs that\n an environment should maintain (even if the compute environment is DISABLED
).
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "The minimum number of vCPUs that an environment should maintain (even if the compute environment \n is DISABLED
).
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
}
},
"maxvCpus": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The maximum number of Amazon EC2 vCPUs that an environment can reach.
\n \n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT
strategy using Spot Instances, Batch might need to exceed\n maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.
\n "
+ "smithy.api#documentation": "The maximum number of Amazon EC2 vCPUs that an environment can reach.
\n \n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
(recommended) strategies using On-Demand or Spot \n Instances, and the BEST_FIT
strategy using Spot Instances, Batch might need to \n exceed maxvCpus
to meet your capacity requirements. In this event, Batch never \n exceeds maxvCpus
by more than a single instance.
\n "
}
},
"desiredvCpus": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The desired number of\n vCPUS in the\n compute environment. Batch modifies this value between the minimum and maximum values based on\n job queue demand.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n Batch doesn't support changing the desired number of vCPUs of an existing compute\n environment. Don't specify this parameter for compute environments using Amazon EKS clusters.
\n \n \n When you update the desiredvCpus
setting, the value must be between the\n minvCpus
and maxvCpus
values.
\n Additionally, the updated desiredvCpus
value must be greater than or equal to\n the current desiredvCpus
value. For more information, see Troubleshooting\n Batch in the Batch User Guide.
\n "
+ "smithy.api#documentation": "The desired number of vCPUS in the compute environment. Batch modifies this value between \n the minimum and maximum values based on job queue demand.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n Batch doesn't support changing the desired number of vCPUs of an existing compute\n environment. Don't specify this parameter for compute environments using Amazon EKS clusters.
\n \n \n When you update the desiredvCpus
setting, the value must be between the\n minvCpus
and maxvCpus
values.
\n Additionally, the updated desiredvCpus
value must be greater than or equal to\n the current desiredvCpus
value. For more information, see Troubleshooting\n Batch in the Batch User Guide.
\n "
}
},
"subnets": {
@@ -1951,7 +1951,7 @@
"allocationStrategy": {
"target": "com.amazonaws.batch#CRUpdateAllocationStrategy",
"traits": {
- "smithy.api#documentation": "The allocation strategy to use for the compute resource if there's not enough instances of\n the best fitting instance type that can be allocated. This might be because of availability of\n the instance type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n When updating a compute environment, changing the allocation strategy requires an\n infrastructure update of the compute environment. For more information, see Updating compute\n environments in the Batch User Guide. BEST_FIT
isn't\n supported when updating a compute environment.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n - BEST_FIT_PROGRESSIVE
\n - \n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n \n - SPOT_CAPACITY_OPTIMIZED
\n - \n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n - SPOT_PRICE_CAPACITY_OPTIMIZED
\n - \n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n
\n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
\n (recommended) strategies using On-Demand or Spot Instances, and the\n BEST_FIT
strategy using Spot Instances, Batch might need to exceed\n maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.
"
+ "smithy.api#documentation": "The allocation strategy to use for the compute resource if there's not enough instances of\n the best fitting instance type that can be allocated. This might be because of availability of\n the instance type in the Region or Amazon EC2 service limits. For more\n information, see Allocation strategies in the Batch User Guide.
\n When updating a compute environment, changing the allocation strategy requires an\n infrastructure update of the compute environment. For more information, see Updating compute\n environments in the Batch User Guide. BEST_FIT
isn't\n supported when updating a compute environment.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n \n \n - BEST_FIT_PROGRESSIVE
\n - \n
Batch selects additional instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types with lower cost vCPUs. If\n additional instances of the previously selected instance types aren't available, Batch\n selects new instance types.
\n \n - SPOT_CAPACITY_OPTIMIZED
\n - \n
Batch selects one or more instance types that are large enough to meet the requirements\n of the jobs in the queue. Its preference is for instance types that are less likely to be\n interrupted. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n - SPOT_PRICE_CAPACITY_OPTIMIZED
\n - \n
The price and capacity optimized allocation strategy looks at both price and capacity to\n select the Spot Instance pools that are the least likely to be interrupted and have the lowest\n possible price. This allocation strategy is only available for Spot Instance compute\n resources.
\n \n
\n With BEST_FIT_PROGRESSIVE
,SPOT_CAPACITY_OPTIMIZED
and\n SPOT_PRICE_CAPACITY_OPTIMIZED
(recommended) strategies using On-Demand or Spot Instances, \n and the BEST_FIT
strategy using Spot Instances, Batch might need to exceed\n maxvCpus
to meet your capacity requirements. In this event, Batch never exceeds\n maxvCpus
by more than a single instance.
"
}
},
"instanceTypes": {
@@ -1969,7 +1969,7 @@
"instanceRole": {
"target": "com.amazonaws.batch#String",
"traits": {
- "smithy.api#documentation": "The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment.\n Required for Amazon EC2\n instances. You can specify the short name or full Amazon Resource Name (ARN) of an instance\n profile. For example, \n ecsInstanceRole\n
or\n arn:aws:iam:::instance-profile/ecsInstanceRole\n
.\n For more information, see Amazon ECS instance role in the Batch User Guide.
\n When updating a compute environment, changing this setting requires an infrastructure update\n of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment.\n Required for Amazon EC2 instances. You can specify the short name or full Amazon Resource Name (ARN) of an instance\n profile. For example, \n ecsInstanceRole\n
or\n arn:aws:iam:::instance-profile/ecsInstanceRole\n
.\n For more information, see Amazon ECS instance role in the Batch User Guide.
\n When updating a compute environment, changing this setting requires an infrastructure update\n of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
}
},
"tags": {
@@ -1987,7 +1987,7 @@
"bidPercentage": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, the Spot price must be less than 20% of the current On-Demand price for that\n Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. For most use\n cases, we recommend leaving this field empty.
\n When updating a compute environment, changing the bid percentage requires an infrastructure\n update of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
+ "smithy.api#documentation": "The maximum percentage that a Spot Instance price can be when compared with the On-Demand\n price for that instance type before instances are launched. For example, if your maximum\n percentage is 20%, the Spot price must be less than 20% of the current On-Demand price for that\n Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum\n percentage. For most use cases, we recommend leaving this field empty.
\n When updating a compute environment, changing the bid percentage requires an infrastructure\n update of the compute environment. For more information, see Updating compute environments in the\n Batch User Guide.
\n \n This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
\n "
}
},
"launchTemplate": {
@@ -2061,7 +2061,7 @@
"executionRoleArn": {
"target": "com.amazonaws.batch#String",
"traits": {
- "smithy.api#documentation": "The Amazon Resource Name (ARN) of the\n execution\n role that Batch can assume. For more information,\n see Batch execution IAM\n role in the Batch User Guide.
"
+ "smithy.api#documentation": "The Amazon Resource Name (ARN) of the execution role that Batch can assume. For more information,\n see Batch execution IAM\n role in the Batch User Guide.
"
}
},
"volumes": {
@@ -2263,7 +2263,7 @@
"image": {
"target": "com.amazonaws.batch#String",
"traits": {
- "smithy.api#documentation": "Required.\n The image used to start a container. This string is passed directly to the\n Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are\n specified with\n \n repository-url/image:tag\n
.\n It can be 255 characters long. It can contain uppercase and lowercase letters, numbers,\n hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to Image
in the\n Create a container section of the Docker Remote API and the IMAGE
\n parameter of docker run.
\n \n Docker image architecture must match the processor architecture of the compute resources\n that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based\n compute resources.
\n \n \n - \n
Images in Amazon ECR Public repositories use the full registry/repository[:tag]
or\n registry/repository[@digest]
naming conventions. For example,\n public.ecr.aws/registry_alias/my-web-app:latest\n
.
\n \n - \n
Images in Amazon ECR repositories use the full registry and repository URI (for example,\n 123456789012.dkr.ecr..amazonaws.com/
).
\n \n - \n
Images in official repositories on Docker Hub use a single name (for example,\n ubuntu
or mongo
).
\n \n - \n
Images in other repositories on Docker Hub are qualified with an organization name (for\n example, amazon/amazon-ecs-agent
).
\n \n - \n
Images in other online repositories are qualified further by a domain name (for example,\n quay.io/assemblyline/ubuntu
).
\n \n
"
+ "smithy.api#documentation": "Required. The image used to start a container. This string is passed directly to the\n Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are\n specified with\n \n repository-url/image:tag\n
.\n It can be 255 characters long. It can contain uppercase and lowercase letters, numbers,\n hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to Image
in the\n Create a container section of the Docker Remote API and the IMAGE
\n parameter of docker run.
\n \n Docker image architecture must match the processor architecture of the compute resources\n that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based\n compute resources.
\n \n \n - \n
Images in Amazon ECR Public repositories use the full registry/repository[:tag]
or\n registry/repository[@digest]
naming conventions. For example,\n public.ecr.aws/registry_alias/my-web-app:latest\n
.
\n \n - \n
Images in Amazon ECR repositories use the full registry and repository URI (for example,\n 123456789012.dkr.ecr..amazonaws.com/
).
\n \n - \n
Images in official repositories on Docker Hub use a single name (for example,\n ubuntu
or mongo
).
\n \n - \n
Images in other repositories on Docker Hub are qualified with an organization name (for\n example, amazon/amazon-ecs-agent
).
\n \n - \n
Images in other online repositories are qualified further by a domain name (for example,\n quay.io/assemblyline/ubuntu
).
\n \n
"
}
},
"vcpus": {
@@ -2406,7 +2406,7 @@
}
},
"traits": {
- "smithy.api#documentation": "Container properties are used\n for\n Amazon ECS based job definitions. These properties to describe the container that's\n launched as part of a job.
"
+ "smithy.api#documentation": "Container properties are used for Amazon ECS based job definitions. These properties to describe the \n container that's launched as part of a job.
"
}
},
"com.amazonaws.batch#ContainerSummary": {
@@ -2446,7 +2446,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Creates an Batch compute environment. You can create MANAGED
or\n UNMANAGED
compute environments. MANAGED
compute environments can\n use Amazon EC2 or Fargate resources. UNMANAGED
compute environments can only use\n EC2 resources.
\n In a managed compute environment, Batch manages the capacity and instance types of the\n compute resources within the environment. This is based on the compute resource specification\n that you define or the launch template that you\n specify when you create the compute environment. Either, you can choose to use EC2 On-Demand\n Instances and EC2 Spot Instances. Or, you can use Fargate and Fargate Spot capacity in\n your managed compute environment. You can optionally set a maximum price so that Spot\n Instances only launch when the Spot Instance price is less than a specified percentage of the\n On-Demand price.
\n \n Multi-node parallel jobs aren't supported on Spot Instances.
\n \n In an unmanaged compute environment, you can manage your own EC2 compute resources and\n have flexibility with how you configure your compute resources. For example, you can use\n custom AMIs. However, you must verify that each of your AMIs meet the Amazon ECS container instance\n AMI specification. For more information, see container instance AMIs in the\n Amazon Elastic Container Service Developer Guide. After you created your unmanaged compute environment,\n you can use the DescribeComputeEnvironments operation to find the Amazon ECS\n cluster that's associated with it. Then, launch your container instances into that Amazon ECS\n cluster. For more information, see Launching an Amazon ECS container\n instance in the Amazon Elastic Container Service Developer Guide.
\n \n To create a compute environment that uses EKS resources, the caller must have\n permissions to call eks:DescribeCluster
.
\n \n \n Batch doesn't automatically upgrade the AMIs in a compute environment after it's\n created. For example, it also doesn't update the AMIs in your compute environment when a\n newer version of the Amazon ECS optimized AMI is available. You're responsible for the management\n of the guest operating system. This includes any updates and security patches. You're also\n responsible for any additional application software or utilities that you install on the\n compute resources. There are two ways to use a new AMI for your Batch jobs. The original\n method is to complete these steps:
\n \n - \n
Create a new compute environment with the new AMI.
\n \n - \n
Add the compute environment to an existing job queue.
\n \n - \n
Remove the earlier compute environment from your job queue.
\n \n - \n
Delete the earlier compute environment.
\n \n
\n In April 2022, Batch added enhanced support for updating compute environments. For\n more information, see Updating compute environments.\n To use the enhanced updating of compute environments to update AMIs, follow these\n rules:
\n \n - \n
Either don't set the service role (serviceRole
) parameter or set it to\n the AWSBatchServiceRole service-linked role.
\n \n - \n
Set the allocation strategy (allocationStrategy
) parameter to\n BEST_FIT_PROGRESSIVE
, SPOT_CAPACITY_OPTIMIZED
, or\n SPOT_PRICE_CAPACITY_OPTIMIZED
.
\n \n - \n
Set the update to latest image version (updateToLatestImageVersion
)\n parameter to\n true
.\n The updateToLatestImageVersion
parameter is used when you update a compute\n environment. This parameter is ignored when you create a compute\n environment.
\n \n - \n
Don't specify an AMI ID in imageId
, imageIdOverride
(in\n \n ec2Configuration
\n ), or in the launch template\n (launchTemplate
). In that case, Batch selects the latest Amazon ECS\n optimized AMI that's supported by Batch at the time the infrastructure update is\n initiated. Alternatively, you can specify the AMI ID in the imageId
or\n imageIdOverride
parameters, or the launch template identified by the\n LaunchTemplate
properties. Changing any of these properties starts an\n infrastructure update. If the AMI ID is specified in the launch template, it can't be\n replaced by specifying an AMI ID in either the imageId
or\n imageIdOverride
parameters. It can only be replaced by specifying a\n different launch template, or if the launch template version is set to\n $Default
or $Latest
, by setting either a new default version\n for the launch template (if $Default
) or by adding a new version to the\n launch template (if $Latest
).
\n \n
\n If these rules are followed, any update that starts an infrastructure update causes the\n AMI ID to be re-selected. If the version
setting in the launch template\n (launchTemplate
) is set to $Latest
or $Default
, the\n latest or default version of the launch template is evaluated up at the time of the\n infrastructure update, even if the launchTemplate
wasn't updated.
\n ",
+ "smithy.api#documentation": "Creates an Batch compute environment. You can create MANAGED
or\n UNMANAGED
compute environments. MANAGED
compute environments can\n use Amazon EC2 or Fargate resources. UNMANAGED
compute environments can only use\n EC2 resources.
\n In a managed compute environment, Batch manages the capacity and instance types of the\n compute resources within the environment. This is based on the compute resource specification\n that you define or the launch template that you\n specify when you create the compute environment. Either, you can choose to use EC2 On-Demand\n Instances and EC2 Spot Instances. Or, you can use Fargate and Fargate Spot capacity in\n your managed compute environment. You can optionally set a maximum price so that Spot\n Instances only launch when the Spot Instance price is less than a specified percentage of the\n On-Demand price.
\n \n Multi-node parallel jobs aren't supported on Spot Instances.
\n \n In an unmanaged compute environment, you can manage your own EC2 compute resources and\n have flexibility with how you configure your compute resources. For example, you can use\n custom AMIs. However, you must verify that each of your AMIs meet the Amazon ECS container instance\n AMI specification. For more information, see container instance AMIs in the\n Amazon Elastic Container Service Developer Guide. After you created your unmanaged compute environment,\n you can use the DescribeComputeEnvironments operation to find the Amazon ECS\n cluster that's associated with it. Then, launch your container instances into that Amazon ECS\n cluster. For more information, see Launching an Amazon ECS container\n instance in the Amazon Elastic Container Service Developer Guide.
\n \n To create a compute environment that uses EKS resources, the caller must have\n permissions to call eks:DescribeCluster
.
\n \n \n Batch doesn't automatically upgrade the AMIs in a compute environment after it's\n created. For example, it also doesn't update the AMIs in your compute environment when a\n newer version of the Amazon ECS optimized AMI is available. You're responsible for the management\n of the guest operating system. This includes any updates and security patches. You're also\n responsible for any additional application software or utilities that you install on the\n compute resources. There are two ways to use a new AMI for your Batch jobs. The original\n method is to complete these steps:
\n \n - \n
Create a new compute environment with the new AMI.
\n \n - \n
Add the compute environment to an existing job queue.
\n \n - \n
Remove the earlier compute environment from your job queue.
\n \n - \n
Delete the earlier compute environment.
\n \n
\n In April 2022, Batch added enhanced support for updating compute environments. For\n more information, see Updating compute environments.\n To use the enhanced updating of compute environments to update AMIs, follow these\n rules:
\n \n - \n
Either don't set the service role (serviceRole
) parameter or set it to\n the AWSBatchServiceRole service-linked role.
\n \n - \n
Set the allocation strategy (allocationStrategy
) parameter to\n BEST_FIT_PROGRESSIVE
, SPOT_CAPACITY_OPTIMIZED
, or\n SPOT_PRICE_CAPACITY_OPTIMIZED
.
\n \n - \n
Set the update to latest image version (updateToLatestImageVersion
)\n parameter to true
. The updateToLatestImageVersion
parameter \n is used when you update a compute environment. This parameter is ignored when you create \n a compute environment.
\n \n - \n
Don't specify an AMI ID in imageId
, imageIdOverride
(in\n \n ec2Configuration
\n ), or in the launch template\n (launchTemplate
). In that case, Batch selects the latest Amazon ECS\n optimized AMI that's supported by Batch at the time the infrastructure update is\n initiated. Alternatively, you can specify the AMI ID in the imageId
or\n imageIdOverride
parameters, or the launch template identified by the\n LaunchTemplate
properties. Changing any of these properties starts an\n infrastructure update. If the AMI ID is specified in the launch template, it can't be\n replaced by specifying an AMI ID in either the imageId
or\n imageIdOverride
parameters. It can only be replaced by specifying a\n different launch template, or if the launch template version is set to\n $Default
or $Latest
, by setting either a new default version\n for the launch template (if $Default
) or by adding a new version to the\n launch template (if $Latest
).
\n \n
\n If these rules are followed, any update that starts an infrastructure update causes the\n AMI ID to be re-selected. If the version
setting in the launch template\n (launchTemplate
) is set to $Latest
or $Default
, the\n latest or default version of the launch template is evaluated up at the time of the\n infrastructure update, even if the launchTemplate
wasn't updated.
\n ",
"smithy.api#examples": [
{
"title": "To create a managed EC2 compute environment",
@@ -3989,6 +3989,15 @@
"smithy.api#documentation": "The properties for a task definition that describes the container and volume definitions of\n an Amazon ECS task. You can specify which Docker images to use, the required resources, and other\n configurations related to launching the task definition through an Amazon ECS service or task.
"
}
},
+ "com.amazonaws.batch#EksAnnotationsMap": {
+ "type": "map",
+ "key": {
+ "target": "com.amazonaws.batch#String"
+ },
+ "value": {
+ "target": "com.amazonaws.batch#String"
+ }
+ },
"com.amazonaws.batch#EksAttemptContainerDetail": {
"type": "structure",
"members": {
@@ -4420,6 +4429,12 @@
"smithy.api#documentation": "The path on the container where the volume is mounted.
"
}
},
+ "subPath": {
+ "target": "com.amazonaws.batch#String",
+ "traits": {
+ "smithy.api#documentation": "A sub-path inside the referenced volume instead of its root.
"
+ }
+ },
"readOnly": {
"target": "com.amazonaws.batch#Boolean",
"traits": {
@@ -4503,10 +4518,44 @@
"traits": {
"smithy.api#documentation": "Key-value pairs used to identify, sort, and organize cube resources. Can contain up to 63\n uppercase letters, lowercase letters, numbers, hyphens (-), and underscores (_). Labels can be\n added or modified at any time. Each resource can have multiple labels, but each key must be\n unique for a given object.
"
}
+ },
+ "annotations": {
+ "target": "com.amazonaws.batch#EksAnnotationsMap",
+ "traits": {
+ "smithy.api#documentation": "Key-value pairs used to attach arbitrary, non-identifying metadata to Kubernetes objects. \n Valid annotation keys have two segments: an optional prefix and a name, separated by a \n slash (/).
\n \n - \n
The prefix is optional and must be 253 characters or less. If specified, the prefix \n must be a DNS subdomain− a series of DNS labels separated by dots (.), and it must \n end with a slash (/).
\n \n - \n
The name segment is required and must be 63 characters or less. It can include alphanumeric \n characters ([a-z0-9A-Z]), dashes (-), underscores (_), and dots (.), but must begin and end \n with an alphanumeric character.
\n \n
\n \n Annotation values must be 255 characters or less.
\n \n Annotations can be added or modified at any time. Each resource can have multiple annotations.
"
+ }
+ },
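
The annotation-key format described above (optional DNS-subdomain prefix ending in a slash, then a name segment of at most 63 characters bounded by alphanumerics) can be sketched as a small validator. This is only an illustration of the documented constraints, not code from the SDK or from Batch:

```python
import re

# Name segment: <= 63 chars, alphanumeric at both ends, with - _ . allowed inside.
_NAME = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")
# A single DNS label, used to check the optional prefix (a dot-separated subdomain).
_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_valid_annotation_key(key: str) -> bool:
    """Check an annotation key against the prefix/name rules documented above."""
    prefix, sep, name = key.rpartition("/")
    if sep:
        # Optional prefix present: must be a DNS subdomain of at most 253 chars.
        if len(prefix) > 253 or not all(_LABEL.match(l) for l in prefix.split(".")):
            return False
    # Name segment is required: at most 63 chars, alphanumeric first and last.
    return len(name) <= 63 and bool(_NAME.match(name))
```

For example, `is_valid_annotation_key("eks.amazonaws.com/compute-type")` accepts a prefixed key, while a key with a leading dash or an empty name segment is rejected.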
+ "namespace": {
+ "target": "com.amazonaws.batch#String",
+ "traits": {
+ "smithy.api#documentation": "The namespace of the Amazon EKS cluster. In Kubernetes, namespaces provide a mechanism for isolating \n groups of resources within a single cluster. Names of resources need to be unique within a namespace, \n but not across namespaces. Batch places Batch Job pods in this namespace. If this field is provided, \n the value can't be empty or null. It must meet the following requirements:
\n \n \n For more information, see \n Namespaces in the Kubernetes documentation. This namespace can be \n different from the kubernetesNamespace
set in the compute environment's \n EksConfiguration
, but must have identical role-based access control (RBAC) roles as \n the compute environment's kubernetesNamespace
. For multi-node parallel jobs,\n the same value must be provided across all the node ranges.
"
+ }
}
},
"traits": {
- "smithy.api#documentation": "Describes and uniquely identifies Kubernetes resources. For example, the compute environment that\n a pod runs in or the jobID
for a job running in the pod. For more information, see\n Understanding Kubernetes Objects in the Kubernetes documentation.
"
+ "smithy.api#documentation": "Describes and uniquely identifies Kubernetes resources. For example, the compute environment that\n a pod runs in or the jobID
for a job running in the pod. For more information, see\n \n Understanding Kubernetes Objects in the Kubernetes documentation.
"
+ }
+ },
+ "com.amazonaws.batch#EksPersistentVolumeClaim": {
+ "type": "structure",
+ "members": {
+ "claimName": {
+ "target": "com.amazonaws.batch#String",
+ "traits": {
+ "smithy.api#clientOptional": {},
+ "smithy.api#documentation": "The name of the persistentVolumeClaim
bounded to a persistentVolume
. \n For more information, see \n Persistent Volume Claims in the Kubernetes documentation.
",
+ "smithy.api#required": {}
+ }
+ },
+ "readOnly": {
+ "target": "com.amazonaws.batch#Boolean",
+ "traits": {
+ "smithy.api#documentation": "An optional boolean value indicating if the mount is read only. Default is false. For more\n information, see \n Read Only Mounts in the Kubernetes documentation.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "A persistentVolumeClaim
volume is used to mount a PersistentVolume\n into a Pod. PersistentVolumeClaims are a way for users to \"claim\" durable storage without knowing \n the details of the particular cloud environment. See the information about PersistentVolumes\n in the Kubernetes documentation.
"
}
},
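
Together with the new `subPath` field on volume mounts, the claim defined by `EksPersistentVolumeClaim` can be referenced from a job definition's EKS pod properties. A hedged sketch of the request fragment follows; the nesting mirrors the `EksPodProperties`, `EksVolume`, and `EksContainerVolumeMount` shapes in this model, and names like `my-claim` are placeholders:

```python
# Illustrative fragment only: field nesting assumed from the model shapes
# in this diff; the claim, volume, and path names are invented for the example.
pod_properties = {
    "volumes": [
        {
            "name": "shared-data",
            "persistentVolumeClaim": {
                "claimName": "my-claim",  # bound to an existing PersistentVolume
                "readOnly": False,        # optional; defaults to False
            },
        }
    ],
    "containers": [
        {
            "volumeMounts": [
                {
                    "name": "shared-data",
                    "mountPath": "/data",
                    "subPath": "job-output",  # new: mount a sub-path, not the volume root
                }
            ]
        }
    ],
}
```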
"com.amazonaws.batch#EksPodProperties": {
@@ -4557,7 +4606,7 @@
"metadata": {
"target": "com.amazonaws.batch#EksMetadata",
"traits": {
- "smithy.api#documentation": "Metadata about the\n Kubernetes\n pod. For\n more information, see Understanding Kubernetes Objects in the Kubernetes\n documentation.
"
+ "smithy.api#documentation": "Metadata about the Kubernetes pod. For more information, see Understanding Kubernetes Objects in the Kubernetes\n documentation.
"
}
},
"shareProcessNamespace": {
@@ -4663,7 +4712,7 @@
"metadata": {
"target": "com.amazonaws.batch#EksMetadata",
"traits": {
- "smithy.api#documentation": "Metadata about the\n overrides for the container that's used on the Amazon EKS pod.
"
+ "smithy.api#documentation": "Metadata about the overrides for the container that's used on the Amazon EKS pod.
"
}
}
},
@@ -4772,6 +4821,12 @@
"traits": {
"smithy.api#documentation": "Specifies the configuration of a Kubernetes secret
volume. For more information, see\n secret in the\n Kubernetes documentation.
"
}
+ },
+ "persistentVolumeClaim": {
+ "target": "com.amazonaws.batch#EksPersistentVolumeClaim",
+ "traits": {
+ "smithy.api#documentation": "Specifies the configuration of a Kubernetes persistentVolumeClaim
bounded to a \n persistentVolume
. For more information, see \n Persistent Volume Claims in the Kubernetes documentation.
"
+ }
}
},
"traits": {
@@ -4812,7 +4867,7 @@
"onStatusReason": {
"target": "com.amazonaws.batch#String",
"traits": {
- "smithy.api#documentation": "Contains a glob pattern to match against the StatusReason
returned for a job.\n The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.),\n colons (:), and white spaces (including spaces or tabs).\n It can\n optionally end with an asterisk (*) so that only the start of the string needs to be an exact\n match.
"
+ "smithy.api#documentation": "Contains a glob pattern to match against the StatusReason
returned for a job.\n The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.),\n colons (:), and white spaces (including spaces or tabs). It can optionally end with an asterisk (*) \n so that only the start of the string needs to be an exact match.
"
}
},
"onReason": {
@@ -7021,13 +7076,13 @@
"operatingSystemFamily": {
"target": "com.amazonaws.batch#String",
"traits": {
- "smithy.api#documentation": "The operating system for the compute environment.\n Valid values are:\n LINUX
(default), WINDOWS_SERVER_2019_CORE
,\n WINDOWS_SERVER_2019_FULL
, WINDOWS_SERVER_2022_CORE
, and\n WINDOWS_SERVER_2022_FULL
.
\n \n The following parameters can’t be set for Windows containers: linuxParameters
,\n privileged
, user
, ulimits
,\n readonlyRootFilesystem
,\n and efsVolumeConfiguration
.
\n \n \n The Batch Scheduler checks\n the compute environments\n that are attached to the job queue before registering a task definition with\n Fargate. In this\n scenario, the job queue is where the job is submitted. If the job requires a\n Windows container and the first compute environment is LINUX
, the compute\n environment is skipped and the next compute environment is checked until a Windows-based compute\n environment is found.
\n \n \n Fargate Spot is not supported for\n ARM64
and\n Windows-based containers on Fargate. A job queue will be blocked if a\n Fargate\n ARM64
or\n Windows job is submitted to a job queue with only Fargate Spot compute environments.\n However, you can attach both FARGATE
and\n FARGATE_SPOT
compute environments to the same job queue.
\n "
+ "smithy.api#documentation": "The operating system for the compute environment. Valid values are:\n LINUX
(default), WINDOWS_SERVER_2019_CORE
,\n WINDOWS_SERVER_2019_FULL
, WINDOWS_SERVER_2022_CORE
, and\n WINDOWS_SERVER_2022_FULL
.
\n \n The following parameters can’t be set for Windows containers: linuxParameters
,\n privileged
, user
, ulimits
,\n readonlyRootFilesystem
, and efsVolumeConfiguration
.
\n \n \n The Batch Scheduler checks the compute environments that are attached to the job queue before \n registering a task definition with Fargate. In this scenario, the job queue is where the job is \n submitted. If the job requires a Windows container and the first compute environment is LINUX
, \n the compute environment is skipped and the next compute environment is checked until a Windows-based \n compute environment is found.
\n \n \n Fargate Spot is not supported for ARM64
and Windows-based containers on Fargate. \n A job queue will be blocked if a Fargate ARM64
or Windows job is submitted to a job \n queue with only Fargate Spot compute environments. However, you can attach both FARGATE
and\n FARGATE_SPOT
compute environments to the same job queue.
\n "
}
},
"cpuArchitecture": {
"target": "com.amazonaws.batch#String",
"traits": {
- "smithy.api#documentation": " The vCPU architecture. The default value is X86_64
. Valid values are\n X86_64
and ARM64
.
\n \n This parameter must be set to\n X86_64
\n for Windows containers.
\n \n \n Fargate Spot is not supported for ARM64
and Windows-based containers on\n Fargate. A job queue will be blocked if a Fargate ARM64
or Windows job is\n submitted to a job queue with only Fargate Spot compute environments. However, you can attach\n both FARGATE
and FARGATE_SPOT
compute environments to the same job\n queue.
\n "
+ "smithy.api#documentation": " The vCPU architecture. The default value is X86_64
. Valid values are\n X86_64
and ARM64
.
\n \n This parameter must be set to X86_64
for Windows containers.
\n \n \n Fargate Spot is not supported for ARM64
and Windows-based containers on\n Fargate. A job queue will be blocked if a Fargate ARM64
or Windows job is\n submitted to a job queue with only Fargate Spot compute environments. However, you can attach\n both FARGATE
and FARGATE_SPOT
compute environments to the same job\n queue.
\n "
}
}
},
@@ -7247,7 +7302,7 @@
"schedulingPriorityOverride": {
"target": "com.amazonaws.batch#Integer",
"traits": {
- "smithy.api#documentation": "The scheduling priority for the job. This only affects jobs in job queues with a fair\n share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower\n scheduling priority.\n This\n overrides any scheduling priority in the job definition and works only within a single share\n identifier.
\n The minimum supported value is 0 and the maximum supported value is 9999.
"
+ "smithy.api#documentation": "The scheduling priority for the job. This only affects jobs in job queues with a fair\n share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower\n scheduling priority. This overrides any scheduling priority in the job definition and works only \n within a single share identifier.
\n The minimum supported value is 0 and the maximum supported value is 9999.
"
}
},
"arrayProperties": {
diff --git a/codegen/sdk-codegen/aws-models/budgets.json b/codegen/sdk-codegen/aws-models/budgets.json
index 0fea2c0505a..abf52523d08 100644
--- a/codegen/sdk-codegen/aws-models/budgets.json
+++ b/codegen/sdk-codegen/aws-models/budgets.json
@@ -340,6 +340,108 @@
},
"type": "endpoint"
},
+ {
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-iso"
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseFIPS"
+ },
+ false
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseDualStack"
+ },
+ false
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://budgets.c2s.ic.gov",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "budgets",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
+ {
+ "conditions": [
+ {
+ "fn": "stringEquals",
+ "argv": [
+ {
+ "fn": "getAttr",
+ "argv": [
+ {
+ "ref": "PartitionResult"
+ },
+ "name"
+ ]
+ },
+ "aws-iso-b"
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseFIPS"
+ },
+ false
+ ]
+ },
+ {
+ "fn": "booleanEquals",
+ "argv": [
+ {
+ "ref": "UseDualStack"
+ },
+ false
+ ]
+ }
+ ],
+ "endpoint": {
+ "url": "https://budgets.global.sc2s.sgov.gov",
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "budgets",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
+ "headers": {}
+ },
+ "type": "endpoint"
+ },
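
The two rules added above follow the usual endpoint-ruleset shape: every condition (`stringEquals` on the partition name, `booleanEquals` on the FIPS and dual-stack flags) must hold before the static endpoint is returned; otherwise evaluation falls through to later rules. A minimal sketch of that evaluation for just these two rules, not the SDK's generated resolver:

```python
# Sketch of the two added aws-iso / aws-iso-b rules; mirrors the JSON
# ruleset shape above but is not the SDK's actual endpoint resolver.
RULES = [
    {"partition": "aws-iso", "url": "https://budgets.c2s.ic.gov",
     "signing_region": "us-iso-east-1"},
    {"partition": "aws-iso-b", "url": "https://budgets.global.sc2s.sgov.gov",
     "signing_region": "us-isob-east-1"},
]

def resolve(partition_name, use_fips, use_dualstack):
    """Return the matching static endpoint, or None to fall through."""
    for rule in RULES:
        # All three conditions must evaluate true, matching the
        # stringEquals / booleanEquals conditions in the ruleset.
        if (rule["partition"] == partition_name
                and use_fips is False
                and use_dualstack is False):
            return {"url": rule["url"],
                    "signingRegion": rule["signing_region"]}
    return None
```

For example, `resolve("aws-iso", False, False)` yields the `budgets.c2s.ic.gov` endpoint with signing region `us-iso-east-1`, matching the new `aws-iso-global` test case later in this file, while enabling FIPS or dual-stack falls through.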
{
"conditions": [
{
@@ -864,6 +966,28 @@
"UseDualStack": false
}
},
+ {
+ "documentation": "For region aws-iso-global with FIPS disabled and DualStack disabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "budgets",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
+ "url": "https://budgets.c2s.ic.gov"
+ }
+ },
+ "params": {
+ "Region": "aws-iso-global",
+ "UseFIPS": false,
+ "UseDualStack": false
+ }
+ },
{
"documentation": "For region us-iso-east-1 with FIPS enabled and DualStack enabled",
"expect": {
@@ -903,7 +1027,16 @@
"documentation": "For region us-iso-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://budgets.us-iso-east-1.c2s.ic.gov"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "budgets",
+ "signingRegion": "us-iso-east-1"
+ }
+ ]
+ },
+ "url": "https://budgets.c2s.ic.gov"
}
},
"params": {
@@ -912,6 +1045,28 @@
"UseDualStack": false
}
},
+ {
+ "documentation": "For region aws-iso-b-global with FIPS disabled and DualStack disabled",
+ "expect": {
+ "endpoint": {
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "budgets",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
+ "url": "https://budgets.global.sc2s.sgov.gov"
+ }
+ },
+ "params": {
+ "Region": "aws-iso-b-global",
+ "UseFIPS": false,
+ "UseDualStack": false
+ }
+ },
{
"documentation": "For region us-isob-east-1 with FIPS enabled and DualStack enabled",
"expect": {
@@ -951,7 +1106,16 @@
"documentation": "For region us-isob-east-1 with FIPS disabled and DualStack disabled",
"expect": {
"endpoint": {
- "url": "https://budgets.us-isob-east-1.sc2s.sgov.gov"
+ "properties": {
+ "authSchemes": [
+ {
+ "name": "sigv4",
+ "signingName": "budgets",
+ "signingRegion": "us-isob-east-1"
+ }
+ ]
+ },
+ "url": "https://budgets.global.sc2s.sgov.gov"
}
},
"params": {
diff --git a/codegen/sdk-codegen/aws-models/cleanroomsml.json b/codegen/sdk-codegen/aws-models/cleanroomsml.json
index 04abfdca6d4..3ed70e8c21c 100644
--- a/codegen/sdk-codegen/aws-models/cleanroomsml.json
+++ b/codegen/sdk-codegen/aws-models/cleanroomsml.json
@@ -1034,6 +1034,9 @@
"traits": {
"smithy.api#documentation": "The protected SQL query parameters.
"
}
+ },
+ "sqlComputeConfiguration": {
+ "target": "com.amazonaws.cleanroomsml#ComputeConfiguration"
}
},
"traits": {
@@ -9925,7 +9928,7 @@
"dataSource": {
"target": "com.amazonaws.cleanroomsml#ModelInferenceDataSource",
"traits": {
- "smithy.api#documentation": "Defines he data source that is used for the trained model inference job.
",
+ "smithy.api#documentation": "Defines the data source that is used for the trained model inference job.
",
"smithy.api#required": {}
}
},
diff --git a/codegen/sdk-codegen/aws-models/cloud9.json b/codegen/sdk-codegen/aws-models/cloud9.json
index b5b96a0925c..806df005d26 100644
--- a/codegen/sdk-codegen/aws-models/cloud9.json
+++ b/codegen/sdk-codegen/aws-models/cloud9.json
@@ -85,7 +85,7 @@
"name": "cloud9"
},
"aws.protocols#awsJson1_1": {},
- "smithy.api#documentation": "Cloud9\n Cloud9 is a collection of tools that you can use to code, build, run, test, debug, and\n release software in the cloud.
\n For more information about Cloud9, see the Cloud9 User Guide.
\n Cloud9 supports these operations:
\n \n - \n
\n CreateEnvironmentEC2
: Creates an Cloud9 development environment, launches\n an Amazon EC2 instance, and then connects from the instance to the environment.
\n \n - \n
\n CreateEnvironmentMembership
: Adds an environment member to an\n environment.
\n \n - \n
\n DeleteEnvironment
: Deletes an environment. If an Amazon EC2 instance is\n connected to the environment, also terminates the instance.
\n \n - \n
\n DeleteEnvironmentMembership
: Deletes an environment member from an\n environment.
\n \n - \n
\n DescribeEnvironmentMemberships
: Gets information about environment\n members for an environment.
\n \n - \n
\n DescribeEnvironments
: Gets information about environments.
\n \n - \n
\n DescribeEnvironmentStatus
: Gets status information for an\n environment.
\n \n - \n
\n ListEnvironments
: Gets a list of environment identifiers.
\n \n - \n
\n ListTagsForResource
: Gets the tags for an environment.
\n \n - \n
\n TagResource
: Adds tags to an environment.
\n \n - \n
\n UntagResource
: Removes tags from an environment.
\n \n - \n
\n UpdateEnvironment
: Changes the settings of an existing\n environment.
\n \n - \n
\n UpdateEnvironmentMembership
: Changes the settings of an existing\n environment member for an environment.
\n \n
",
+ "smithy.api#documentation": "Cloud9\n Cloud9 is a collection of tools that you can use to code, build, run, test, debug, and\n release software in the cloud.
\n For more information about Cloud9, see the Cloud9 User Guide.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n \n Cloud9 supports these operations:
\n \n - \n
\n CreateEnvironmentEC2
: Creates an Cloud9 development environment, launches\n an Amazon EC2 instance, and then connects from the instance to the environment.
\n \n - \n
\n CreateEnvironmentMembership
: Adds an environment member to an\n environment.
\n \n - \n
\n DeleteEnvironment
: Deletes an environment. If an Amazon EC2 instance is\n connected to the environment, also terminates the instance.
\n \n - \n
\n DeleteEnvironmentMembership
: Deletes an environment member from an\n environment.
\n \n - \n
\n DescribeEnvironmentMemberships
: Gets information about environment\n members for an environment.
\n \n - \n
\n DescribeEnvironments
: Gets information about environments.
\n \n - \n
\n DescribeEnvironmentStatus
: Gets status information for an\n environment.
\n \n - \n
\n ListEnvironments
: Gets a list of environment identifiers.
\n \n - \n
\n ListTagsForResource
: Gets the tags for an environment.
\n \n - \n
\n TagResource
: Adds tags to an environment.
\n \n - \n
\n UntagResource
: Removes tags from an environment.
\n \n - \n
\n UpdateEnvironment
: Changes the settings of an existing\n environment.
\n \n - \n
\n UpdateEnvironmentMembership
: Changes the settings of an existing\n environment member for an environment.
\n \n
",
"smithy.api#title": "AWS Cloud9",
"smithy.rules#endpointRuleSet": {
"version": "1.0",
@@ -1116,7 +1116,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Creates an Cloud9 development environment, launches an Amazon Elastic Compute Cloud (Amazon EC2) instance, and\n then connects from the instance to the environment.
",
+ "smithy.api#documentation": "Creates an Cloud9 development environment, launches an Amazon Elastic Compute Cloud (Amazon EC2) instance, and\n then connects from the instance to the environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "CreateEnvironmentEC2",
@@ -1176,7 +1176,7 @@
"imageId": {
"target": "com.amazonaws.cloud9#ImageId",
"traits": {
- "smithy.api#documentation": "The identifier for the Amazon Machine Image (AMI) that's used to create the EC2 instance.\n To choose an AMI for the instance, you must specify a valid AMI alias or a valid Amazon EC2 Systems Manager (SSM)\n path.
\n From December 04, 2023, you will be required to include the imageId
parameter\n for the CreateEnvironmentEC2
action. This change will be reflected across all\n direct methods of communicating with the API, such as Amazon Web Services SDK, Amazon Web Services CLI and Amazon Web Services\n CloudFormation. This change will only affect direct API consumers, and not Cloud9 console\n users.
\n We recommend using Amazon Linux 2023 as the AMI to create your environment as it is fully\n supported.
\n Since Ubuntu 18.04 has ended standard support as of May 31, 2023, we recommend you choose Ubuntu 22.04.
\n \n AMI aliases \n
\n \n - \n
Amazon Linux 2: amazonlinux-2-x86_64
\n
\n \n - \n
Amazon Linux 2023 (recommended): amazonlinux-2023-x86_64
\n
\n \n - \n
Ubuntu 18.04: ubuntu-18.04-x86_64
\n
\n \n - \n
Ubuntu 22.04: ubuntu-22.04-x86_64
\n
\n \n
\n \n SSM paths\n
\n \n - \n
Amazon Linux 2:\n resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64
\n
\n \n - \n
Amazon Linux 2023 (recommended): resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2023-x86_64
\n
\n \n - \n
Ubuntu 18.04:\n resolve:ssm:/aws/service/cloud9/amis/ubuntu-18.04-x86_64
\n
\n \n - \n
Ubuntu 22.04:\n resolve:ssm:/aws/service/cloud9/amis/ubuntu-22.04-x86_64
\n
\n \n
",
+ "smithy.api#documentation": "The identifier for the Amazon Machine Image (AMI) that's used to create the EC2 instance.\n To choose an AMI for the instance, you must specify a valid AMI alias or a valid Amazon EC2 Systems Manager (SSM)\n path.
\n \n We recommend using Amazon Linux 2023 as the AMI to create your environment as it is fully\n supported.
\n From December 16, 2024, Ubuntu 18.04 will be removed from the list of available\n imageIds
for Cloud9. This change is necessary as Ubuntu 18.04 has ended standard\n support on May 31, 2023. This change will only affect direct API consumers, and not Cloud9\n console users.
\n Since Ubuntu 18.04 has ended standard support as of May 31, 2023, we recommend you choose\n Ubuntu 22.04.
\n \n AMI aliases \n
\n \n - \n
Amazon Linux 2: amazonlinux-2-x86_64
\n
\n \n - \n
Amazon Linux 2023 (recommended): amazonlinux-2023-x86_64
\n
\n \n - \n
Ubuntu 18.04: ubuntu-18.04-x86_64
\n
\n \n - \n
Ubuntu 22.04: ubuntu-22.04-x86_64
\n
\n \n
\n \n SSM paths\n
\n \n - \n
Amazon Linux 2:\n resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64
\n
\n \n - \n
Amazon Linux 2023 (recommended):\n resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2023-x86_64
\n
\n \n - \n
Ubuntu 18.04:\n resolve:ssm:/aws/service/cloud9/amis/ubuntu-18.04-x86_64
\n
\n \n - \n
Ubuntu 22.04:\n resolve:ssm:/aws/service/cloud9/amis/ubuntu-22.04-x86_64
\n
\n \n
",
"smithy.api#required": {}
}
},
@@ -1261,7 +1261,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Adds an environment member to an Cloud9 development environment.
",
+ "smithy.api#documentation": "Adds an environment member to an Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "CreateEnvironmentMembership",
@@ -1360,7 +1360,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Deletes an Cloud9 development environment. If an Amazon EC2 instance is connected to the\n environment, also terminates the instance.
",
+ "smithy.api#documentation": "Deletes an Cloud9 development environment. If an Amazon EC2 instance is connected to the\n environment, also terminates the instance.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "DeleteEnvironment",
@@ -1406,7 +1406,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Deletes an environment member from a development environment.
",
+ "smithy.api#documentation": "Deletes an environment member from a development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "DeleteEnvironmentMembership",
@@ -1504,31 +1504,8 @@
}
],
"traits": {
- "smithy.api#documentation": "Gets information about environment members for an Cloud9 development environment.
",
+ "smithy.api#documentation": "Gets information about environment members for an Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
- {
- "title": "DescribeEnvironmentMemberships1",
- "documentation": "The following example gets information about all of the environment members for the specified development environment.",
- "input": {
- "environmentId": "8d9967e2f0624182b74e7690ad69ebEX"
- },
- "output": {
- "memberships": [
- {
- "environmentId": "8d9967e2f0624182b74e7690ad69ebEX",
- "permissions": "read-write",
- "userArn": "arn:aws:iam::123456789012:user/AnotherDemoUser",
- "userId": "AIDAJ3BA6O2FMJWCWXHEX"
- },
- {
- "environmentId": "8d9967e2f0624182b74e7690ad69ebEX",
- "permissions": "owner",
- "userArn": "arn:aws:iam::123456789012:user/MyDemoUser",
- "userId": "AIDAJNUEDQAQWFELJDLEX"
- }
- ]
- }
- },
{
"title": "DescribeEnvironmentMemberships2",
"documentation": "The following example gets information about the owner of the specified development environment.",
@@ -1573,6 +1550,29 @@
}
]
}
+ },
+ {
+ "title": "DescribeEnvironmentMemberships1",
+ "documentation": "The following example gets information about all of the environment members for the specified development environment.",
+ "input": {
+ "environmentId": "8d9967e2f0624182b74e7690ad69ebEX"
+ },
+ "output": {
+ "memberships": [
+ {
+ "environmentId": "8d9967e2f0624182b74e7690ad69ebEX",
+ "permissions": "read-write",
+ "userArn": "arn:aws:iam::123456789012:user/AnotherDemoUser",
+ "userId": "AIDAJ3BA6O2FMJWCWXHEX"
+ },
+ {
+ "environmentId": "8d9967e2f0624182b74e7690ad69ebEX",
+ "permissions": "owner",
+ "userArn": "arn:aws:iam::123456789012:user/MyDemoUser",
+ "userId": "AIDAJNUEDQAQWFELJDLEX"
+ }
+ ]
+ }
}
],
"smithy.api#paginated": {
@@ -1672,7 +1672,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Gets status information for an Cloud9 development environment.
",
+ "smithy.api#documentation": "Gets status information for an Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "DescribeEnvironmentStatus",
@@ -1757,7 +1757,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Gets information about Cloud9 development environments.
",
+ "smithy.api#documentation": "Gets information about Cloud9 development environments.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "DescribeEnvironments",
@@ -2228,7 +2228,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Gets a list of Cloud9 development environment identifiers.
",
+ "smithy.api#documentation": "Gets a list of Cloud9 development environment identifiers.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "ListEnvironments",
@@ -2308,7 +2308,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Gets a list of the tags associated with an Cloud9 development environment.
"
+ "smithy.api#documentation": "Gets a list of the tags associated with an Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n "
}
},
"com.amazonaws.cloud9#ListTagsForResourceRequest": {
@@ -2602,7 +2602,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Adds tags to an Cloud9 development environment.
\n \n Tags that you add to an Cloud9 environment by using this method will NOT be\n automatically propagated to underlying resources.
\n "
+ "smithy.api#documentation": "Adds tags to an Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n \n \n Tags that you add to an Cloud9 environment by using this method will NOT be\n automatically propagated to underlying resources.
\n "
}
},
"com.amazonaws.cloud9#TagResourceRequest": {
@@ -2691,7 +2691,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Removes tags from an Cloud9 development environment.
"
+ "smithy.api#documentation": "Removes tags from an Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n "
}
},
"com.amazonaws.cloud9#UntagResourceRequest": {
@@ -2755,7 +2755,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Changes the settings of an existing Cloud9 development environment.
",
+ "smithy.api#documentation": "Changes the settings of an existing Cloud9 development environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "UpdateEnvironment",
@@ -2803,7 +2803,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Changes the settings of an existing environment member for an Cloud9 development\n environment.
",
+ "smithy.api#documentation": "Changes the settings of an existing environment member for an Cloud9 development\n environment.
\n \n Cloud9 is no longer available to new customers. Existing customers of \n Cloud9 can continue to use the service as normal. \n Learn more\"\n
\n ",
"smithy.api#examples": [
{
"title": "UpdateEnvironmentMembership",
diff --git a/codegen/sdk-codegen/aws-models/cloudfront.json b/codegen/sdk-codegen/aws-models/cloudfront.json
index 389634425be..e5652f64583 100644
--- a/codegen/sdk-codegen/aws-models/cloudfront.json
+++ b/codegen/sdk-codegen/aws-models/cloudfront.json
@@ -5553,13 +5553,13 @@
"OriginReadTimeout": {
"target": "com.amazonaws.cloudfront#integer",
"traits": {
- "smithy.api#documentation": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is\n\t\t\talso known as the origin response timeout. The minimum timeout is 1\n\t\t\tsecond, the maximum is 60 seconds, and the default (if you don't specify otherwise) is\n\t\t\t30 seconds.
\n For more information, see Origin Response Timeout in the\n\t\t\t\tAmazon CloudFront Developer Guide.
"
+ "smithy.api#documentation": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is\n\t\t\talso known as the origin response timeout. The minimum timeout is 1\n\t\t\tsecond, the maximum is 60 seconds, and the default (if you don't specify otherwise) is\n\t\t\t30 seconds.
\n For more information, see Response timeout (custom origins only) in the\n\t\t\t\tAmazon CloudFront Developer Guide.
"
}
},
"OriginKeepaliveTimeout": {
"target": "com.amazonaws.cloudfront#integer",
"traits": {
- "smithy.api#documentation": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The\n\t\t\tminimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't\n\t\t\tspecify otherwise) is 5 seconds.
\n For more information, see Origin Keep-alive Timeout in the\n\t\t\t\tAmazon CloudFront Developer Guide.
"
+ "smithy.api#documentation": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The\n\t\t\tminimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't\n\t\t\tspecify otherwise) is 5 seconds.
\n For more information, see Keep-alive timeout (custom origins only) in the\n\t\t\t\tAmazon CloudFront Developer Guide.
"
}
}
},
@@ -7079,7 +7079,7 @@
"DefaultRootObject": {
"target": "com.amazonaws.cloudfront#string",
"traits": {
- "smithy.api#documentation": "The object that you want CloudFront to request from your origin (for example,\n\t\t\t\tindex.html
) when a viewer requests the root URL for your distribution\n\t\t\t\t(https://www.example.com
) instead of an object in your distribution\n\t\t\t\t(https://www.example.com/product-description.html
). Specifying a\n\t\t\tdefault root object avoids exposing the contents of your distribution.
\n Specify only the object name, for example, index.html
. Don't add a\n\t\t\t\t/
before the object name.
\n If you don't want to specify a default root object when you create a distribution,\n\t\t\tinclude an empty DefaultRootObject
element.
\n To delete the default root object from an existing distribution, update the\n\t\t\tdistribution configuration and include an empty DefaultRootObject
\n\t\t\telement.
\n To replace the default root object, update the distribution configuration and specify\n\t\t\tthe new object.
\n For more information about the default root object, see Creating a\n\t\t\t\tDefault Root Object in the Amazon CloudFront Developer Guide.
"
+ "smithy.api#documentation": "When a viewer requests the root URL for your distribution, the default root object is the\n\t\t\tobject that you want CloudFront to request from your origin. For example, if your root URL is\n\t\t\t\thttps://www.example.com
, you can specify CloudFront to return the\n\t\t\t\tindex.html
file as the default root object. You can specify a default\n\t\t\troot object so that viewers see a specific file or object, instead of another object in\n\t\t\tyour distribution (for example,\n\t\t\t\thttps://www.example.com/product-description.html
). A default root\n\t\t\tobject avoids exposing the contents of your distribution.
\n You can specify the object name or a path to the object name (for example,\n\t\t\t\tindex.html
or exampleFolderName/index.html
). Your string\n\t\t\tcan't begin with a forward slash (/
). Only specify the object name or the\n\t\t\tpath to the object.
\n If you don't want to specify a default root object when you create a distribution,\n\t\t\tinclude an empty DefaultRootObject
element.
\n To delete the default root object from an existing distribution, update the\n\t\t\tdistribution configuration and include an empty DefaultRootObject
\n\t\t\telement.
\n To replace the default root object, update the distribution configuration and specify\n\t\t\tthe new object.
\n For more information about the default root object, see Specify a default root object in the Amazon CloudFront Developer Guide.
"
}
},
"Origins": {
@@ -20477,6 +20477,18 @@
"smithy.api#documentation": "The VPC origin ID.
",
"smithy.api#required": {}
}
+ },
+ "OriginReadTimeout": {
+ "target": "com.amazonaws.cloudfront#integer",
+ "traits": {
+ "smithy.api#documentation": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is\n\t\t\talso known as the origin response timeout. The minimum timeout is 1\n\t\t\tsecond, the maximum is 60 seconds, and the default (if you don't specify otherwise) is\n\t\t\t30 seconds.
\n For more information, see Response timeout (custom origins only) in the\n\t\t\tAmazon CloudFront Developer Guide.
"
+ }
+ },
+ "OriginKeepaliveTimeout": {
+ "target": "com.amazonaws.cloudfront#integer",
+ "traits": {
+ "smithy.api#documentation": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The\n\t\t\tminimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't\n\t\t\tspecify otherwise) is 5 seconds.
\n For more information, see Keep-alive timeout (custom origins only) in the\n\t\t\tAmazon CloudFront Developer Guide.
"
+ }
}
},
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/cloudhsm-v2.json b/codegen/sdk-codegen/aws-models/cloudhsm-v2.json
index bd920e51783..18c34751c07 100644
--- a/codegen/sdk-codegen/aws-models/cloudhsm-v2.json
+++ b/codegen/sdk-codegen/aws-models/cloudhsm-v2.json
@@ -1350,6 +1350,18 @@
"smithy.api#error": "client"
}
},
+ "com.amazonaws.cloudhsmv2#CloudHsmResourceLimitExceededException": {
+ "type": "structure",
+ "members": {
+ "Message": {
+ "target": "com.amazonaws.cloudhsmv2#errorMessage"
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The request was rejected because it exceeds an CloudHSM limit.
",
+ "smithy.api#error": "client"
+ }
+ },
"com.amazonaws.cloudhsmv2#CloudHsmResourceNotFoundException": {
"type": "structure",
"members": {
@@ -1467,6 +1479,12 @@
"smithy.api#documentation": "The identifier (ID) of the virtual private cloud (VPC) that contains the\n cluster.
"
}
},
+ "NetworkType": {
+ "target": "com.amazonaws.cloudhsmv2#NetworkType",
+ "traits": {
+ "smithy.api#documentation": "The cluster's NetworkType can be set to either IPV4 (which is the default) or DUALSTACK.\n When set to IPV4, communication between your application and the Hardware Security Modules (HSMs) is restricted to the IPv4 protocol only.\n In contrast, the DUALSTACK network type enables communication over both the IPv4 and IPv6 protocols.\n To use the DUALSTACK option, you'll need to configure your Virtual Private Cloud (VPC) and subnets to support both IPv4 and IPv6. This involves adding IPv6 Classless Inter-Domain Routing (CIDR) blocks to the existing IPv4 CIDR blocks in your subnets.\n The choice between IPV4 and DUALSTACK network types determines the flexibility of the network addressing setup for your cluster. The DUALSTACK option provides more flexibility by allowing both IPv4 and IPv6 communication.
"
+ }
+ },
"Certificates": {
"target": "com.amazonaws.cloudhsmv2#Certificates",
"traits": {
@@ -1552,6 +1570,18 @@
"smithy.api#enumValue": "UPDATE_IN_PROGRESS"
}
},
+ "MODIFY_IN_PROGRESS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "MODIFY_IN_PROGRESS"
+ }
+ },
+ "ROLLBACK_IN_PROGRESS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ROLLBACK_IN_PROGRESS"
+ }
+ },
"DELETE_IN_PROGRESS": {
"target": "smithy.api#Unit",
"traits": {
@@ -1722,6 +1752,12 @@
"smithy.api#required": {}
}
},
+ "NetworkType": {
+ "target": "com.amazonaws.cloudhsmv2#NetworkType",
+ "traits": {
+ "smithy.api#documentation": "The NetworkType to create a cluster with. The allowed values are\n IPV4
and DUALSTACK
.\n
"
+ }
+ },
"TagList": {
"target": "com.amazonaws.cloudhsmv2#TagList",
"traits": {
@@ -2208,7 +2244,20 @@
"inputToken": "NextToken",
"outputToken": "NextToken",
"pageSize": "MaxResults"
- }
+ },
+ "smithy.test#smokeTests": [
+ {
+ "id": "DescribeClustersSuccess",
+ "params": {},
+ "vendorParams": {
+ "region": "us-west-2"
+ },
+ "vendorParamsShape": "aws.test#AwsVendorParams",
+ "expect": {
+ "success": {}
+ }
+ }
+ ]
}
},
"com.amazonaws.cloudhsmv2#DescribeClustersRequest": {
@@ -2421,6 +2470,12 @@
"smithy.api#documentation": "The IP address of the HSM's elastic network interface (ENI).
"
}
},
+ "EniIpV6": {
+ "target": "com.amazonaws.cloudhsmv2#IpV6Address",
+ "traits": {
+ "smithy.api#documentation": "The IPv6 address (if any) of the HSM's elastic network interface (ENI).
"
+ }
+ },
"HsmId": {
"target": "com.amazonaws.cloudhsmv2#HsmId",
"traits": {
@@ -2586,6 +2641,15 @@
"smithy.api#pattern": "^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}$"
}
},
+ "com.amazonaws.cloudhsmv2#IpV6Address": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 100
+ }
+ }
+ },
"com.amazonaws.cloudhsmv2#ListTags": {
"type": "operation",
"input": {
@@ -2804,6 +2868,23 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.cloudhsmv2#NetworkType": {
+ "type": "enum",
+ "members": {
+ "IPV4": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "IPV4"
+ }
+ },
+ "DUALSTACK": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DUALSTACK"
+ }
+ }
+ }
+ },
"com.amazonaws.cloudhsmv2#NextToken": {
"type": "string",
"traits": {
@@ -3088,6 +3169,9 @@
{
"target": "com.amazonaws.cloudhsmv2#CloudHsmInvalidRequestException"
},
+ {
+ "target": "com.amazonaws.cloudhsmv2#CloudHsmResourceLimitExceededException"
+ },
{
"target": "com.amazonaws.cloudhsmv2#CloudHsmResourceNotFoundException"
},
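The cloudhsm-v2.json hunks above constrain `IpAddress` with the dotted-quad pattern shown in the model and give the new `IpV6Address` shape only a length trait (0-100 characters). A client-side pre-validation sketch of those two traits (the function is hypothetical, not generated SDK code) might be:

```go
package main

import (
	"fmt"
	"regexp"
)

// eniIPv4 is the IpAddress pattern taken verbatim from the Smithy model.
var eniIPv4 = regexp.MustCompile(`^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$`)

// validEniAddresses checks an HSM's EniIp against the model's pattern trait
// and its EniIpV6 against the new length trait (max 100 characters).
// Illustrative only; the service performs its own validation.
func validEniAddresses(eniIP, eniIPv6 string) bool {
	return eniIPv4.MatchString(eniIP) && len(eniIPv6) <= 100
}

func main() {
	fmt.Println(validEniAddresses("10.0.0.12", "2600:1f14::1"))
}
```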
diff --git a/codegen/sdk-codegen/aws-models/cloudwatch-logs.json b/codegen/sdk-codegen/aws-models/cloudwatch-logs.json
index b0595b8e02a..d4a1f0549b8 100644
--- a/codegen/sdk-codegen/aws-models/cloudwatch-logs.json
+++ b/codegen/sdk-codegen/aws-models/cloudwatch-logs.json
@@ -5741,7 +5741,7 @@
"traits": {
"smithy.api#length": {
"min": 1,
- "max": 256
+ "max": 50
},
"smithy.api#pattern": "^[\\.\\-_/#A-Za-z0-9]+$"
}
@@ -5751,7 +5751,7 @@
"traits": {
"smithy.api#length": {
"min": 1,
- "max": 256
+ "max": 50
},
"smithy.api#pattern": "^[\\.\\-_/#A-Za-z0-9]+$"
}
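The cloudwatch-logs.json hunks above tighten these strings' maximum length from 256 to 50 while keeping the same character-class pattern. A hypothetical pre-check of the updated trait (not SDK code) could be written as:

```go
package main

import (
	"fmt"
	"regexp"
)

// logsFieldPattern is the pattern trait from the model, unchanged by the hunk.
var logsFieldPattern = regexp.MustCompile(`^[\.\-_/#A-Za-z0-9]+$`)

// fitsConstraint applies the updated length trait (min 1, max 50) together
// with the existing pattern. Illustrative client-side sketch only.
func fitsConstraint(s string) bool {
	return len(s) >= 1 && len(s) <= 50 && logsFieldPattern.MatchString(s)
}

func main() {
	fmt.Println(fitsConstraint("service/api.latency")) // allowed characters, within 50
	fmt.Println(fitsConstraint("has spaces"))          // rejected by the pattern
}
```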
diff --git a/codegen/sdk-codegen/aws-models/codepipeline.json b/codegen/sdk-codegen/aws-models/codepipeline.json
index 2181eea8e1c..edbae106e55 100644
--- a/codegen/sdk-codegen/aws-models/codepipeline.json
+++ b/codegen/sdk-codegen/aws-models/codepipeline.json
@@ -1155,7 +1155,7 @@
"category": {
"target": "com.amazonaws.codepipeline#ActionCategory",
"traits": {
- "smithy.api#documentation": "A category defines what kind of action can be taken in the stage, and constrains\n the provider type for the action. Valid categories are limited to one of the following\n values.
\n \n - \n
Source
\n \n - \n
Build
\n \n - \n
Test
\n \n - \n
Deploy
\n \n - \n
Invoke
\n \n - \n
Approval
\n \n
",
+ "smithy.api#documentation": "A category defines what kind of action can be taken in the stage, and constrains\n the provider type for the action. Valid categories are limited to one of the following\n values.
\n \n - \n
Source
\n \n - \n
Build
\n \n - \n
Test
\n \n - \n
Deploy
\n \n - \n
Invoke
\n \n - \n
Approval
\n \n - \n
Compute
\n \n
",
"smithy.api#required": {}
}
},
@@ -2955,7 +2955,7 @@
}
},
"traits": {
- "smithy.api#documentation": "The condition for the stage. A condition is made up of the rules and the result for\n the condition.
"
+ "smithy.api#documentation": "The condition for the stage. A condition is made up of the rules and the result for\n the condition. For more information about conditions, see Stage conditions.\n For more information about rules, see the CodePipeline rule\n reference.
"
}
},
"com.amazonaws.codepipeline#ConditionExecution": {
@@ -4024,7 +4024,7 @@
"category": {
"target": "com.amazonaws.codepipeline#ActionCategory",
"traits": {
- "smithy.api#documentation": "Defines what kind of action can be taken in the stage. The following are the valid\n values:
\n \n - \n
\n Source
\n
\n \n - \n
\n Build
\n
\n \n - \n
\n Test
\n
\n \n - \n
\n Deploy
\n
\n \n - \n
\n Approval
\n
\n \n - \n
\n Invoke
\n
\n \n
",
+ "smithy.api#documentation": "Defines what kind of action can be taken in the stage. The following are the valid\n values:
\n \n - \n
\n Source
\n
\n \n - \n
\n Build
\n
\n \n - \n
\n Test
\n
\n \n - \n
\n Deploy
\n
\n \n - \n
\n Approval
\n
\n \n - \n
\n Invoke
\n
\n \n - \n
\n Compute
\n
\n \n
",
"smithy.api#required": {}
}
},
@@ -5611,7 +5611,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Lists the rules for the condition.
"
+ "smithy.api#documentation": "Lists the rules for the condition. For more information about conditions, see Stage\n conditions. For more information about rules, see the CodePipeline rule reference.
"
}
},
"com.amazonaws.codepipeline#ListRuleTypesInput": {
@@ -8057,7 +8057,7 @@
"name": {
"target": "com.amazonaws.codepipeline#RuleName",
"traits": {
- "smithy.api#documentation": "The name of the rule that is created for the condition, such as\n CheckAllResults.
",
+ "smithy.api#documentation": "The name of the rule that is created for the condition, such as\n VariableCheck
.
",
"smithy.api#required": {}
}
},
@@ -8074,6 +8074,12 @@
"smithy.api#documentation": "The action configuration fields for the rule.
"
}
},
+ "commands": {
+ "target": "com.amazonaws.codepipeline#CommandList",
+ "traits": {
+ "smithy.api#documentation": "The shell commands to run with your commands rule in CodePipeline. All commands\n are supported except multi-line formats. While CodeBuild logs and permissions\n are used, you do not need to create any resources in CodeBuild.
\n \n Using compute time for this action will incur separate charges in CodeBuild.
\n "
+ }
+ },
"inputArtifacts": {
"target": "com.amazonaws.codepipeline#InputArtifactList",
"traits": {
@@ -8100,7 +8106,7 @@
}
},
"traits": {
- "smithy.api#documentation": "Represents information about the rule to be created for an associated condition. An\n example would be creating a new rule for an entry condition, such as a rule that checks\n for a test result before allowing the run to enter the deployment stage.
"
+ "smithy.api#documentation": "Represents information about the rule to be created for an associated condition. An\n example would be creating a new rule for an entry condition, such as a rule that checks\n for a test result before allowing the run to enter the deployment stage. For more\n information about conditions, see Stage conditions.\n For more information about rules, see the CodePipeline rule\n reference.
"
}
},
"com.amazonaws.codepipeline#RuleDeclarationList": {
diff --git a/codegen/sdk-codegen/aws-models/connect.json b/codegen/sdk-codegen/aws-models/connect.json
index 206fb10c275..2c95f39ef6d 100644
--- a/codegen/sdk-codegen/aws-models/connect.json
+++ b/codegen/sdk-codegen/aws-models/connect.json
@@ -979,6 +979,9 @@
{
"target": "com.amazonaws.connect#CreateHoursOfOperation"
},
+ {
+ "target": "com.amazonaws.connect#CreateHoursOfOperationOverride"
+ },
{
"target": "com.amazonaws.connect#CreateInstance"
},
@@ -1063,6 +1066,9 @@
{
"target": "com.amazonaws.connect#DeleteHoursOfOperation"
},
+ {
+ "target": "com.amazonaws.connect#DeleteHoursOfOperationOverride"
+ },
{
"target": "com.amazonaws.connect#DeleteInstance"
},
@@ -1144,6 +1150,9 @@
{
"target": "com.amazonaws.connect#DescribeHoursOfOperation"
},
+ {
+ "target": "com.amazonaws.connect#DescribeHoursOfOperationOverride"
+ },
{
"target": "com.amazonaws.connect#DescribeInstance"
},
@@ -1249,6 +1258,9 @@
{
"target": "com.amazonaws.connect#GetCurrentUserData"
},
+ {
+ "target": "com.amazonaws.connect#GetEffectiveHoursOfOperations"
+ },
{
"target": "com.amazonaws.connect#GetFederationToken"
},
@@ -1318,6 +1330,9 @@
{
"target": "com.amazonaws.connect#ListFlowAssociations"
},
+ {
+ "target": "com.amazonaws.connect#ListHoursOfOperationOverrides"
+ },
{
"target": "com.amazonaws.connect#ListHoursOfOperations"
},
@@ -1453,6 +1468,9 @@
{
"target": "com.amazonaws.connect#SearchEmailAddresses"
},
+ {
+ "target": "com.amazonaws.connect#SearchHoursOfOperationOverrides"
+ },
{
"target": "com.amazonaws.connect#SearchHoursOfOperations"
},
@@ -1603,12 +1621,18 @@
{
"target": "com.amazonaws.connect#UpdateHoursOfOperation"
},
+ {
+ "target": "com.amazonaws.connect#UpdateHoursOfOperationOverride"
+ },
{
"target": "com.amazonaws.connect#UpdateInstanceAttribute"
},
{
"target": "com.amazonaws.connect#UpdateInstanceStorageConfig"
},
+ {
+ "target": "com.amazonaws.connect#UpdateParticipantAuthentication"
+ },
{
"target": "com.amazonaws.connect#UpdateParticipantRoleConfig"
},
@@ -4203,6 +4227,28 @@
"smithy.api#default": 0
}
},
+ "com.amazonaws.connect#AuthenticationError": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 2048
+ },
+ "smithy.api#pattern": "^[\\x20-\\x21\\x23-\\x5B\\x5D-\\x7E]*$",
+ "smithy.api#sensitive": {}
+ }
+ },
+ "com.amazonaws.connect#AuthenticationErrorDescription": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 2048
+ },
+ "smithy.api#pattern": "^[\\x20-\\x21\\x23-\\x5B\\x5D-\\x7E]*$",
+ "smithy.api#sensitive": {}
+ }
+ },
"com.amazonaws.connect#AuthenticationProfile": {
"type": "structure",
"members": {
@@ -4362,6 +4408,16 @@
"target": "com.amazonaws.connect#AuthenticationProfileSummary"
}
},
+ "com.amazonaws.connect#AuthorizationCode": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 2048
+ },
+ "smithy.api#sensitive": {}
+ }
+ },
"com.amazonaws.connect#AutoAccept": {
"type": "boolean",
"traits": {
@@ -5309,6 +5365,18 @@
"target": "com.amazonaws.connect#CommonAttributeAndCondition"
}
},
+ "com.amazonaws.connect#CommonHumanReadableDescription": {
+ "type": "string",
+ "traits": {
+ "smithy.api#pattern": "^[\\P{C}\\r\\n\\t]{1,250}$"
+ }
+ },
+ "com.amazonaws.connect#CommonHumanReadableName": {
+ "type": "string",
+ "traits": {
+ "smithy.api#pattern": "^[\\P{C}\\r\\n\\t]{1,127}$"
+ }
+ },
"com.amazonaws.connect#CommonNameLength127": {
"type": "string",
"traits": {
@@ -5450,7 +5518,7 @@
}
},
"traits": {
- "smithy.api#documentation": "A conditional check failed.
",
+ "smithy.api#documentation": "Request processing failed because a dependent condition failed.
",
"smithy.api#error": "client",
"smithy.api#httpError": 409
}
@@ -5623,6 +5691,12 @@
"smithy.api#documentation": "Information about Amazon Connect Wisdom.
"
}
},
+ "CustomerId": {
+ "target": "com.amazonaws.connect#CustomerId",
+ "traits": {
+ "smithy.api#documentation": "The customer's identification number. For example, the CustomerId
may be a\n customer number from your CRM. You can create a Lambda function to pull the unique customer ID of\n the caller from your CRM system. If you enable Amazon Connect Voice ID capability, this\n attribute is populated with the CustomerSpeakerId
of the caller.
"
+ }
+ },
"CustomerEndpoint": {
"target": "com.amazonaws.connect#EndpointInfo",
"traits": {
@@ -5742,7 +5816,7 @@
"ParticipantRole": {
"target": "com.amazonaws.connect#ParticipantRole",
"traits": {
- "smithy.api#documentation": "The role of the participant in the chat conversation.
"
+ "smithy.api#documentation": "The role of the participant in the chat conversation.
\n \n Only CUSTOMER
is currently supported. Any other values other than\n CUSTOMER
will result in an exception (4xx error).
\n "
}
},
"IncludeRawMessage": {
@@ -6051,6 +6125,18 @@
},
"StringCondition": {
"target": "com.amazonaws.connect#StringCondition"
+ },
+ "StateCondition": {
+ "target": "com.amazonaws.connect#ContactFlowModuleState",
+ "traits": {
+ "smithy.api#documentation": "The state of the flow.
"
+ }
+ },
+ "StatusCondition": {
+ "target": "com.amazonaws.connect#ContactFlowModuleStatus",
+ "traits": {
+ "smithy.api#documentation": "The status of the flow.
"
+ }
}
},
"traits": {
@@ -7816,6 +7902,118 @@
}
}
},
+ "com.amazonaws.connect#CreateHoursOfOperationOverride": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#CreateHoursOfOperationOverrideRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#CreateHoursOfOperationOverrideResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#DuplicateResourceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#LimitExceededException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Creates an hours of operation override in an Amazon Connect hours of operation\n resource.
",
+ "smithy.api#http": {
+ "method": "PUT",
+ "uri": "/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connect#CreateHoursOfOperationOverrideRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "Name": {
+ "target": "com.amazonaws.connect#CommonHumanReadableName",
+ "traits": {
+ "smithy.api#documentation": "The name of the hours of operation override.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Description": {
+ "target": "com.amazonaws.connect#CommonHumanReadableDescription",
+ "traits": {
+ "smithy.api#documentation": "The description of the hours of operation override.
"
+ }
+ },
+ "Config": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideConfigList",
+ "traits": {
+ "smithy.api#documentation": "Configuration information for the hours of operation override: day, start time, and end\n time.
",
+ "smithy.api#required": {}
+ }
+ },
+ "EffectiveFrom": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date from which the hours of operation override would be effective.
",
+ "smithy.api#required": {}
+ }
+ },
+ "EffectiveTill": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date until which the hours of operation override would be effective.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connect#CreateHoursOfOperationOverrideResponse": {
+ "type": "structure",
+ "members": {
+ "HoursOfOperationOverrideId": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation override.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connect#CreateHoursOfOperationRequest": {
"type": "structure",
"members": {
@@ -8483,7 +8681,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Creates registration for a device token and a chat contact to receive real-time push\n notifications. For more information about push notifications, see Set up push\n notifications in Amazon Connect for mobile chat in the Amazon Connect\n Administrator Guide.
",
+ "smithy.api#documentation": "Creates registration for a device token and a chat contact to receive real-time push\n notifications. For more information about push notifications, see Set up push\n notifications in Amazon Connect for mobile chat in the Amazon Connect\n Administrator Guide.
",
"smithy.api#http": {
"method": "PUT",
"uri": "/push-notification/{InstanceId}/registrations",
@@ -8589,7 +8787,7 @@
}
],
"traits": {
- "smithy.api#documentation": "This API is in preview release for Amazon Connect and is subject to change.
\n Creates a new queue for the specified Amazon Connect instance.
\n \n \n - \n
If the phone number is claimed to a traffic distribution group that was created in the\n same Region as the Amazon Connect instance where you are calling this API, then you can use a\n full phone number ARN or a UUID for OutboundCallerIdNumberId
. However, if the phone number is claimed\n to a traffic distribution group that is in one Region, and you are calling this API from an instance in another Amazon Web Services Region that is associated with the traffic distribution group, you must provide a full phone number ARN. If a\n UUID is provided in this scenario, you will receive a\n ResourceNotFoundException
.
\n \n - \n
Only use the phone number ARN format that doesn't contain instance
in the\n path, for example, arn:aws:connect:us-east-1:1234567890:phone-number/uuid
. This\n is the same ARN format that is returned when you call the ListPhoneNumbersV2\n API.
\n \n - \n
If you plan to use IAM policies to allow/deny access to this API for phone\n number resources claimed to a traffic distribution group, see Allow or Deny queue API actions for phone numbers in a replica Region.
\n \n
\n ",
+ "smithy.api#documentation": "Creates a new queue for the specified Amazon Connect instance.
\n \n \n - \n
If the phone number is claimed to a traffic distribution group that was created in the\n same Region as the Amazon Connect instance where you are calling this API, then you can use a\n full phone number ARN or a UUID for OutboundCallerIdNumberId
. However, if the phone number is claimed\n to a traffic distribution group that is in one Region, and you are calling this API from an instance in another Amazon Web Services Region that is associated with the traffic distribution group, you must provide a full phone number ARN. If a\n UUID is provided in this scenario, you will receive a\n ResourceNotFoundException
.
\n \n - \n
Only use the phone number ARN format that doesn't contain instance
in the\n path, for example, arn:aws:connect:us-east-1:1234567890:phone-number/uuid
. This\n is the same ARN format that is returned when you call the ListPhoneNumbersV2\n API.
\n \n - \n
If you plan to use IAM policies to allow/deny access to this API for phone\n number resources claimed to a traffic distribution group, see Allow or Deny queue API actions for phone numbers in a replica Region.
\n \n
\n ",
"smithy.api#http": {
"method": "PUT",
"uri": "/queues/{InstanceId}",
@@ -10332,6 +10530,25 @@
"smithy.api#documentation": "Information about the Customer on the contact.
"
}
},
+ "com.amazonaws.connect#CustomerId": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 128
+ }
+ }
+ },
+ "com.amazonaws.connect#CustomerIdNonEmpty": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 128
+ },
+ "smithy.api#sensitive": {}
+ }
+ },
"com.amazonaws.connect#CustomerProfileAttributesSerialized": {
"type": "string"
},
@@ -10384,6 +10601,67 @@
"target": "com.amazonaws.connect#DataSetId"
}
},
+ "com.amazonaws.connect#DateComparisonType": {
+ "type": "enum",
+ "members": {
+ "GREATER_THAN": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "GREATER_THAN"
+ }
+ },
+ "LESS_THAN": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "LESS_THAN"
+ }
+ },
+ "GREATER_THAN_OR_EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "GREATER_THAN_OR_EQUAL_TO"
+ }
+ },
+ "LESS_THAN_OR_EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "LESS_THAN_OR_EQUAL_TO"
+ }
+ },
+ "EQUAL_TO": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "EQUAL_TO"
+ }
+ }
+ }
+ },
+ "com.amazonaws.connect#DateCondition": {
+ "type": "structure",
+ "members": {
+ "FieldName": {
+ "target": "com.amazonaws.connect#String",
+ "traits": {
+ "smithy.api#documentation": "An object to specify the hours of operation override date field.
"
+ }
+ },
+ "Value": {
+ "target": "com.amazonaws.connect#DateYearMonthDayFormat",
+ "traits": {
+ "smithy.api#documentation": "An object to specify the hours of operation override date value.
"
+ }
+ },
+ "ComparisonType": {
+ "target": "com.amazonaws.connect#DateComparisonType",
+ "traits": {
+ "smithy.api#documentation": "An object to specify the hours of operation override date condition\n comparisonType
.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "An object to specify the hours of operation override date condition.
"
+ }
+ },
"com.amazonaws.connect#DateReference": {
"type": "structure",
"members": {
@@ -10404,6 +10682,12 @@
"smithy.api#documentation": "Information about a reference when the referenceType
is DATE
.\n Otherwise, null.
"
}
},
+ "com.amazonaws.connect#DateYearMonthDayFormat": {
+ "type": "string",
+ "traits": {
+ "smithy.api#pattern": "^\\d{4}-\\d{2}-\\d{2}$"
+ }
+ },
"com.amazonaws.connect#DeactivateEvaluationForm": {
"type": "operation",
"input": {
@@ -10994,6 +11278,72 @@
}
}
},
+ "com.amazonaws.connect#DeleteHoursOfOperationOverride": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#DeleteHoursOfOperationOverrideRequest"
+ },
+ "output": {
+ "target": "smithy.api#Unit"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Deletes an hours of operation override in an Amazon Connect hours of operation\n resource.
",
+ "smithy.api#http": {
+ "method": "DELETE",
+ "uri": "/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides/{HoursOfOperationOverrideId}",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connect#DeleteHoursOfOperationOverrideRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationOverrideId": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation override.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
"com.amazonaws.connect#DeleteHoursOfOperationRequest": {
"type": "structure",
"members": {
@@ -12829,6 +13179,86 @@
}
}
},
+ "com.amazonaws.connect#DescribeHoursOfOperationOverride": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#DescribeHoursOfOperationOverrideRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#DescribeHoursOfOperationOverrideResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Describes the hours of operation override.
",
+ "smithy.api#http": {
+ "method": "GET",
+ "uri": "/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides/{HoursOfOperationOverrideId}",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connect#DescribeHoursOfOperationOverrideRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationOverrideId": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation override.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connect#DescribeHoursOfOperationOverrideResponse": {
+ "type": "structure",
+ "members": {
+ "HoursOfOperationOverride": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverride",
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operation override.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connect#DescribeHoursOfOperationRequest": {
"type": "structure",
"members": {
@@ -15284,6 +15714,32 @@
"com.amazonaws.connect#DurationInSeconds": {
"type": "integer"
},
+ "com.amazonaws.connect#EffectiveHoursOfOperationList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.connect#EffectiveHoursOfOperations"
+ }
+ },
+ "com.amazonaws.connect#EffectiveHoursOfOperations": {
+ "type": "structure",
+ "members": {
+ "Date": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date that the hours of operation or override applies to.
"
+ }
+ },
+ "OperationalHours": {
+ "target": "com.amazonaws.connect#OperationalHours",
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operations with the effective override applied.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operations with the effective override applied.
"
+ }
+ },
"com.amazonaws.connect#Email": {
"type": "string",
"traits": {
@@ -18110,6 +18566,100 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.connect#GetEffectiveHoursOfOperations": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#GetEffectiveHoursOfOperationsRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#GetEffectiveHoursOfOperationsResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Gets the hours of operations with the effective override applied.
",
+ "smithy.api#http": {
+ "method": "GET",
+ "uri": "/effective-hours-of-operations/{InstanceId}/{HoursOfOperationId}",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connect#GetEffectiveHoursOfOperationsRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "FromDate": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date from which the hours of operation are listed.
",
+ "smithy.api#httpQuery": "fromDate",
+ "smithy.api#required": {}
+ }
+ },
+ "ToDate": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date until which the hours of operation are listed.
",
+ "smithy.api#httpQuery": "toDate",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connect#GetEffectiveHoursOfOperationsResponse": {
+ "type": "structure",
+ "members": {
+ "EffectiveHoursOfOperationList": {
+ "target": "com.amazonaws.connect#EffectiveHoursOfOperationList",
+ "traits": {
+ "smithy.api#documentation": "Information about the effective hours of operations.
"
+ }
+ },
+ "TimeZone": {
+ "target": "com.amazonaws.connect#TimeZone",
+ "traits": {
+ "smithy.api#documentation": "The time zone for the hours of operation.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connect#GetFederationToken": {
"type": "operation",
"input": {
@@ -19812,6 +20362,156 @@
"com.amazonaws.connect#HoursOfOperationName": {
"type": "string"
},
+ "com.amazonaws.connect#HoursOfOperationOverride": {
+ "type": "structure",
+ "members": {
+ "HoursOfOperationOverrideId": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation override.
"
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.
"
+ }
+ },
+ "HoursOfOperationArn": {
+ "target": "com.amazonaws.connect#ARN",
+ "traits": {
+ "smithy.api#documentation": "The Amazon Resource Name (ARN) for the hours of operation.
"
+ }
+ },
+ "Name": {
+ "target": "com.amazonaws.connect#CommonHumanReadableName",
+ "traits": {
+ "smithy.api#documentation": "The name of the hours of operation override.
"
+ }
+ },
+ "Description": {
+ "target": "com.amazonaws.connect#CommonHumanReadableDescription",
+ "traits": {
+ "smithy.api#documentation": "The description of the hours of operation override.
"
+ }
+ },
+ "Config": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideConfigList",
+ "traits": {
+ "smithy.api#documentation": "Configuration information for the hours of operation override: day, start time, and end\n time.
"
+ }
+ },
+ "EffectiveFrom": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date from which the hours of operation override would be effective.
"
+ }
+ },
+ "EffectiveTill": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date until which the hours of operation override would be effective.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operation override.
"
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideConfig": {
+ "type": "structure",
+ "members": {
+ "Day": {
+ "target": "com.amazonaws.connect#OverrideDays",
+ "traits": {
+ "smithy.api#documentation": "The day that the hours of operation override applies to.
"
+ }
+ },
+ "StartTime": {
+ "target": "com.amazonaws.connect#OverrideTimeSlice",
+ "traits": {
+ "smithy.api#documentation": "The start time when your contact center opens if overrides are applied.
"
+ }
+ },
+ "EndTime": {
+ "target": "com.amazonaws.connect#OverrideTimeSlice",
+ "traits": {
+ "smithy.api#documentation": "The end time that your contact center closes if overrides are applied.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operation override config: day, start time, and end\n time.
"
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideConfigList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideConfig"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 100
+ }
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideId": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 36
+ }
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverride"
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideSearchConditionList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideSearchCriteria"
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideSearchCriteria": {
+ "type": "structure",
+ "members": {
+ "OrConditions": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideSearchConditionList",
+ "traits": {
+ "smithy.api#documentation": "A list of conditions which would be applied together with an OR condition.
"
+ }
+ },
+ "AndConditions": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideSearchConditionList",
+ "traits": {
+ "smithy.api#documentation": "A list of conditions which would be applied together with an AND condition.
"
+ }
+ },
+ "StringCondition": {
+ "target": "com.amazonaws.connect#StringCondition"
+ },
+ "DateCondition": {
+ "target": "com.amazonaws.connect#DateCondition",
+ "traits": {
+ "smithy.api#documentation": "A leaf node condition which can be used to specify a date condition.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The search criteria to be used to return hours of operation overrides.
"
+ }
+ },
+ "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat": {
+ "type": "string",
+ "traits": {
+ "smithy.api#pattern": "^\\d{4}-\\d{2}-\\d{2}$"
+ }
+ },
"com.amazonaws.connect#HoursOfOperationSearchConditionList": {
"type": "list",
"member": {
@@ -20317,6 +21017,12 @@
"traits": {
"smithy.api#enumValue": "ENHANCED_CHAT_MONITORING"
}
+ },
+ "MULTI_PARTY_CHAT_CONFERENCE": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "MULTI_PARTY_CHAT_CONFERENCE"
+ }
}
}
},
@@ -20799,6 +21505,12 @@
"traits": {
"smithy.api#enumValue": "CALL_TRANSFER_CONNECTOR"
}
+ },
+ "COGNITO_USER_POOL": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "COGNITO_USER_POOL"
+ }
}
}
},
@@ -21691,7 +22403,7 @@
}
],
"traits": {
- "smithy.api#documentation": "This API is in preview release for Amazon Connect and is subject to change.
\n For the specified version of Amazon Lex, returns a paginated list of all the Amazon Lex bots currently associated with the instance. Use this API to returns both Amazon Lex V1 and V2 bots.
",
+ "smithy.api#documentation": "This API is in preview release for Amazon Connect and is subject to change.
\n For the specified version of Amazon Lex, returns a paginated list of all the Amazon Lex bots currently associated with the instance. Use this API to return both Amazon Lex V1 and V2 bots.
",
"smithy.api#http": {
"method": "GET",
"uri": "/instance/{InstanceId}/bots",
@@ -22678,6 +23390,116 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.connect#ListHoursOfOperationOverrides": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#ListHoursOfOperationOverridesRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#ListHoursOfOperationOverridesResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Lists the hours of operation overrides.
",
+ "smithy.api#http": {
+ "method": "GET",
+ "uri": "/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides",
+ "code": 200
+ },
+ "smithy.api#paginated": {
+ "inputToken": "NextToken",
+ "outputToken": "NextToken",
+ "items": "HoursOfOperationOverrideList",
+ "pageSize": "MaxResults"
+ }
+ }
+ },
+ "com.amazonaws.connect#ListHoursOfOperationOverridesRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "com.amazonaws.connect#NextToken",
+ "traits": {
+ "smithy.api#documentation": "The token for the next set of results. Use the value returned in the previous response in\n the next request to retrieve the next set of results.
",
+ "smithy.api#httpQuery": "nextToken"
+ }
+ },
+ "MaxResults": {
+ "target": "com.amazonaws.connect#MaxResult100",
+ "traits": {
+ "smithy.api#documentation": "The maximum number of results to return per page. The default MaxResult size is 100. Valid\n Range: Minimum value of 1. Maximum value of 1000.
",
+ "smithy.api#httpQuery": "maxResults"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connect#ListHoursOfOperationOverridesResponse": {
+ "type": "structure",
+ "members": {
+ "NextToken": {
+ "target": "com.amazonaws.connect#NextToken",
+ "traits": {
+ "smithy.api#documentation": "The token for the next set of results. Use the value returned in the previous response in\n the next request to retrieve the next set of results.
"
+ }
+ },
+ "HoursOfOperationOverrideList": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideList",
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operation overrides.
"
+ }
+ },
+ "LastModifiedRegion": {
+ "target": "com.amazonaws.connect#RegionName",
+ "traits": {
+ "smithy.api#documentation": "The Amazon Web Services Region where this resource was last modified.
"
+ }
+ },
+ "LastModifiedTime": {
+ "target": "com.amazonaws.connect#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "The timestamp when this resource was last modified.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connect#ListHoursOfOperations": {
"type": "operation",
"input": {
@@ -26723,6 +27545,32 @@
}
}
},
+ "com.amazonaws.connect#OperationalHour": {
+ "type": "structure",
+ "members": {
+ "Start": {
+ "target": "com.amazonaws.connect#OverrideTimeSlice",
+ "traits": {
+ "smithy.api#documentation": "The start time that your contact center opens.
"
+ }
+ },
+ "End": {
+ "target": "com.amazonaws.connect#OverrideTimeSlice",
+ "traits": {
+ "smithy.api#documentation": "The end time that your contact center closes.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operations with the effective override applied.
"
+ }
+ },
+ "com.amazonaws.connect#OperationalHours": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.connect#OperationalHour"
+ }
+ },
"com.amazonaws.connect#Origin": {
"type": "string",
"traits": {
@@ -26929,6 +27777,77 @@
"smithy.api#httpError": 404
}
},
+ "com.amazonaws.connect#OverrideDays": {
+ "type": "enum",
+ "members": {
+ "SUNDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "SUNDAY"
+ }
+ },
+ "MONDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "MONDAY"
+ }
+ },
+ "TUESDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "TUESDAY"
+ }
+ },
+ "WEDNESDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "WEDNESDAY"
+ }
+ },
+ "THURSDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "THURSDAY"
+ }
+ },
+ "FRIDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "FRIDAY"
+ }
+ },
+ "SATURDAY": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "SATURDAY"
+ }
+ }
+ }
+ },
+ "com.amazonaws.connect#OverrideTimeSlice": {
+ "type": "structure",
+ "members": {
+ "Hours": {
+ "target": "com.amazonaws.connect#Hours24Format",
+ "traits": {
+ "smithy.api#default": null,
+ "smithy.api#documentation": "The hours.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Minutes": {
+ "target": "com.amazonaws.connect#MinutesLimit60",
+ "traits": {
+ "smithy.api#default": null,
+ "smithy.api#documentation": "The minutes.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The start time or end time for an hours of operation override.
"
+ }
+ },
"com.amazonaws.connect#PEM": {
"type": "string",
"traits": {
@@ -33469,6 +34388,108 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.connect#SearchHoursOfOperationOverrides": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#SearchHoursOfOperationOverridesRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#SearchHoursOfOperationOverridesResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Searches the hours of operation overrides.
",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/search-hours-of-operation-overrides",
+ "code": 200
+ },
+ "smithy.api#paginated": {
+ "inputToken": "NextToken",
+ "outputToken": "NextToken",
+ "items": "HoursOfOperationOverrides",
+ "pageSize": "MaxResults"
+ }
+ }
+ },
+ "com.amazonaws.connect#SearchHoursOfOperationOverridesRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.",
+ "smithy.api#required": {}
+ }
+ },
+ "NextToken": {
+ "target": "com.amazonaws.connect#NextToken2500",
+ "traits": {
+ "smithy.api#documentation": "The token for the next set of results. Use the value returned in the previous response in\n the next request to retrieve the next set of results. Length Constraints: Minimum length of 1.\n Maximum length of 2500."
+ }
+ },
+ "MaxResults": {
+ "target": "com.amazonaws.connect#MaxResult100",
+ "traits": {
+ "smithy.api#documentation": "The maximum number of results to return per page. Valid Range: Minimum value of 1. Maximum\n value of 100."
+ }
+ },
+ "SearchFilter": {
+ "target": "com.amazonaws.connect#HoursOfOperationSearchFilter"
+ },
+ "SearchCriteria": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideSearchCriteria",
+ "traits": {
+ "smithy.api#documentation": "The search criteria to be used to return hours of operations overrides."
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connect#SearchHoursOfOperationOverridesResponse": {
+ "type": "structure",
+ "members": {
+ "HoursOfOperationOverrides": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideList",
+ "traits": {
+ "smithy.api#documentation": "Information about the hours of operations overrides."
+ }
+ },
+ "NextToken": {
+ "target": "com.amazonaws.connect#NextToken2500",
+ "traits": {
+ "smithy.api#documentation": "The token for the next set of results. Use the value returned in the previous response in\n the next request to retrieve the next set of results. Length Constraints: Minimum length of 1.\n Maximum length of 2500."
+ }
+ },
+ "ApproximateTotalCount": {
+ "target": "com.amazonaws.connect#ApproximateTotalCount",
+ "traits": {
+ "smithy.api#documentation": "The total number of hours of operations which matched your search query."
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connect#SearchHoursOfOperations": {
"type": "operation",
"input": {
@@ -35974,6 +36995,12 @@
"traits": {
"smithy.api#documentation": "A set of system defined key-value pairs stored on individual contact segments using an\n attribute map. The attributes are standard Amazon Connect attributes. They can be accessed in\n flows.\n Attribute keys can include only alphanumeric, -, and _.\n This field can be used to show channel subtype, such as connect:Guide.\n \n The types application/vnd.amazonaws.connect.message.interactive and\n application/vnd.amazonaws.connect.message.interactive.response must be present in\n the SupportedMessagingContentTypes field of this API in order to set\n SegmentAttributes as { \"connect:Subtype\": {\"valueString\" : \"connect:Guide\"\n }}.\n "
}
+ },
+ "CustomerId": {
+ "target": "com.amazonaws.connect#CustomerIdNonEmpty",
+ "traits": {
+ "smithy.api#documentation": "The customer's identification number. For example, the CustomerId may be a\n customer number from your CRM."
+ }
}
},
"traits": {
@@ -40578,6 +41605,108 @@
}
}
},
+ "com.amazonaws.connect#UpdateHoursOfOperationOverride": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#UpdateHoursOfOperationOverrideRequest"
+ },
+ "output": {
+ "target": "smithy.api#Unit"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#ConditionalOperationFailedException"
+ },
+ {
+ "target": "com.amazonaws.connect#DuplicateResourceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Update the hours of operation override.",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides/{HoursOfOperationOverrideId}",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connect#UpdateHoursOfOperationOverrideRequest": {
+ "type": "structure",
+ "members": {
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance.",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationId": {
+ "target": "com.amazonaws.connect#HoursOfOperationId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation.",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "HoursOfOperationOverrideId": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideId",
+ "traits": {
+ "smithy.api#documentation": "The identifier for the hours of operation override.",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ },
+ "Name": {
+ "target": "com.amazonaws.connect#CommonHumanReadableName",
+ "traits": {
+ "smithy.api#documentation": "The name of the hours of operation override."
+ }
+ },
+ "Description": {
+ "target": "com.amazonaws.connect#CommonHumanReadableDescription",
+ "traits": {
+ "smithy.api#documentation": "The description of the hours of operation override."
+ }
+ },
+ "Config": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideConfigList",
+ "traits": {
+ "smithy.api#documentation": "Configuration information for the hours of operation override: day, start time, and end\n time."
+ }
+ },
+ "EffectiveFrom": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date from when the hours of operation override would be effective."
+ }
+ },
+ "EffectiveTill": {
+ "target": "com.amazonaws.connect#HoursOfOperationOverrideYearMonthDayDateFormat",
+ "traits": {
+ "smithy.api#documentation": "The date till when the hours of operation override would be effective."
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
"com.amazonaws.connect#UpdateHoursOfOperationRequest": {
"type": "structure",
"members": {
@@ -40763,6 +41892,90 @@
"smithy.api#input": {}
}
},
+ "com.amazonaws.connect#UpdateParticipantAuthentication": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connect#UpdateParticipantAuthenticationRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connect#UpdateParticipantAuthenticationResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connect#AccessDeniedException"
+ },
+ {
+ "target": "com.amazonaws.connect#ConflictException"
+ },
+ {
+ "target": "com.amazonaws.connect#InternalServiceException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidParameterException"
+ },
+ {
+ "target": "com.amazonaws.connect#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.connect#ThrottlingException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Instructs Amazon Connect to resume the authentication process. The subsequent actions\n depend on the request body contents:\n \n - \n\n If a code is provided: Connect retrieves the identity\n information from Amazon Cognito and imports it into Connect Customer Profiles.\n \n - \n\n If an error is provided: The error branch of the\n Authenticate Customer block is executed.\n \n\n \n The API returns a success response to acknowledge the request. However, the interaction and\n exchange of identity information occur asynchronously after the response is returned.\n ",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/contact/update-participant-authentication",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connect#UpdateParticipantAuthenticationRequest": {
+ "type": "structure",
+ "members": {
+ "State": {
+ "target": "com.amazonaws.connect#ParticipantToken",
+ "traits": {
+ "smithy.api#documentation": "The state query parameter that was provided by Cognito in the\n redirectUri. This will also match the state parameter provided in the\n AuthenticationUrl from the GetAuthenticationUrl\n response.",
+ "smithy.api#required": {}
+ }
+ },
+ "InstanceId": {
+ "target": "com.amazonaws.connect#InstanceId",
+ "traits": {
+ "smithy.api#documentation": "The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.",
+ "smithy.api#required": {}
+ }
+ },
+ "Code": {
+ "target": "com.amazonaws.connect#AuthorizationCode",
+ "traits": {
+ "smithy.api#documentation": "The code query parameter provided by Cognito in the\n redirectUri."
+ }
+ },
+ "Error": {
+ "target": "com.amazonaws.connect#AuthenticationError",
+ "traits": {
+ "smithy.api#documentation": "The error query parameter provided by Cognito in the\n redirectUri."
+ }
+ },
+ "ErrorDescription": {
+ "target": "com.amazonaws.connect#AuthenticationErrorDescription",
+ "traits": {
+ "smithy.api#documentation": "The error_description parameter provided by Cognito in the\n redirectUri."
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connect#UpdateParticipantAuthenticationResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connect#UpdateParticipantRoleConfig": {
"type": "operation",
"input": {
@@ -43616,13 +44829,13 @@
"FirstName": {
"target": "com.amazonaws.connect#AgentFirstName",
"traits": {
- "smithy.api#documentation": "The first name. This is required if you are using Amazon Connect or SAML for identity\n management."
+ "smithy.api#documentation": "The first name. This is required if you are using Amazon Connect or SAML for identity\n management. Inputs must be in Unicode Normalization Form C (NFC). Text containing characters in a\n non-NFC form (for example, decomposed characters or combining marks) are not accepted."
}
},
"LastName": {
"target": "com.amazonaws.connect#AgentLastName",
"traits": {
- "smithy.api#documentation": "The last name. This is required if you are using Amazon Connect or SAML for identity\n management."
+ "smithy.api#documentation": "The last name. This is required if you are using Amazon Connect or SAML for identity\n management. Inputs must be in Unicode Normalization Form C (NFC). Text containing characters in a\n non-NFC form (for example, decomposed characters or combining marks) are not accepted."
}
},
"Email": {
@@ -44839,7 +46052,7 @@
"IvrRecordingTrack": {
"target": "com.amazonaws.connect#IvrRecordingTrack",
"traits": {
- "smithy.api#documentation": "Identifies which IVR track is being recorded."
+ "smithy.api#documentation": "Identifies which IVR track is being recorded.\n One and only one of the track configurations should be presented in the request."
}
}
},
diff --git a/codegen/sdk-codegen/aws-models/connectparticipant.json b/codegen/sdk-codegen/aws-models/connectparticipant.json
index 2992b78527b..809c6b5ce86 100644
--- a/codegen/sdk-codegen/aws-models/connectparticipant.json
+++ b/codegen/sdk-codegen/aws-models/connectparticipant.json
@@ -52,6 +52,9 @@
"type": "service",
"version": "2018-09-07",
"operations": [
+ {
+ "target": "com.amazonaws.connectparticipant#CancelParticipantAuthentication"
+ },
{
"target": "com.amazonaws.connectparticipant#CompleteAttachmentUpload"
},
@@ -67,6 +70,9 @@
{
"target": "com.amazonaws.connectparticipant#GetAttachment"
},
+ {
+ "target": "com.amazonaws.connectparticipant#GetAuthenticationUrl"
+ },
{
"target": "com.amazonaws.connectparticipant#GetTranscript"
},
@@ -92,7 +98,7 @@
"name": "execute-api"
},
"aws.protocols#restJson1": {},
- "smithy.api#documentation": "Amazon Connect is an easy-to-use omnichannel cloud contact center service that\n enables companies of any size to deliver superior customer service at a lower cost.\n Amazon Connect communications capabilities make it easy for companies to deliver\n personalized interactions across communication channels, including chat.\n Use the Amazon Connect Participant Service to manage participants (for example,\n agents, customers, and managers listening in), and to send messages and events within a\n chat contact. The APIs in the service enable the following: sending chat messages,\n attachment sharing, managing a participant's connection state and message events, and\n retrieving chat transcripts.",
+ "smithy.api#documentation": "\n Amazon Connect is an easy-to-use omnichannel cloud contact center service that\n enables companies of any size to deliver superior customer service at a lower cost.\n Amazon Connect communications capabilities make it easy for companies to deliver\n personalized interactions across communication channels, including chat.\n Use the Amazon Connect Participant Service to manage participants (for example,\n agents, customers, and managers listening in), and to send messages and events within a\n chat contact. The APIs in the service enable the following: sending chat messages,\n attachment sharing, managing a participant's connection state and message events, and\n retrieving chat transcripts.",
"smithy.api#title": "Amazon Connect Participant Service",
"smithy.rules#endpointRuleSet": {
"version": "1.0",
@@ -875,9 +881,79 @@
"target": "com.amazonaws.connectparticipant#AttachmentItem"
}
},
+ "com.amazonaws.connectparticipant#AuthenticationUrl": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 2083
+ }
+ }
+ },
"com.amazonaws.connectparticipant#Bool": {
"type": "boolean"
},
+ "com.amazonaws.connectparticipant#CancelParticipantAuthentication": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connectparticipant#CancelParticipantAuthenticationRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connectparticipant#CancelParticipantAuthenticationResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connectparticipant#AccessDeniedException"
+ },
+ {
+ "target": "com.amazonaws.connectparticipant#InternalServerException"
+ },
+ {
+ "target": "com.amazonaws.connectparticipant#ThrottlingException"
+ },
+ {
+ "target": "com.amazonaws.connectparticipant#ValidationException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Cancels the authentication session. The opted out branch of the Authenticate Customer\n flow block will be taken.\n \n The current supported channel is chat. This API is not supported for Apple\n Messages for Business, WhatsApp, or SMS chats.\n ",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/participant/cancel-authentication",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connectparticipant#CancelParticipantAuthenticationRequest": {
+ "type": "structure",
+ "members": {
+ "SessionId": {
+ "target": "com.amazonaws.connectparticipant#SessionId",
+ "traits": {
+ "smithy.api#documentation": "The sessionId provided in the authenticationInitiated\n event.",
+ "smithy.api#required": {}
+ }
+ },
+ "ConnectionToken": {
+ "target": "com.amazonaws.connectparticipant#ParticipantToken",
+ "traits": {
+ "smithy.api#documentation": "The authentication token associated with the participant's connection.",
+ "smithy.api#httpHeader": "X-Amz-Bearer",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connectparticipant#CancelParticipantAuthenticationResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.connectparticipant#ChatContent": {
"type": "string",
"traits": {
@@ -1020,7 +1096,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Allows you to confirm that the attachment has been uploaded using the pre-signed URL\n provided in StartAttachmentUpload API. A conflict exception is thrown when an attachment\n with that identifier is already being uploaded.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "Allows you to confirm that the attachment has been uploaded using the pre-signed URL\n provided in StartAttachmentUpload API. A conflict exception is thrown when an attachment\n with that identifier is already being uploaded.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/complete-attachment-upload",
@@ -1077,7 +1153,7 @@
}
},
"traits": {
- "smithy.api#documentation": "The requested operation conflicts with the current state of a service\n resource associated with the request.",
+ "smithy.api#documentation": "The requested operation conflicts with the current state of a service resource\n associated with the request.",
"smithy.api#error": "client",
"smithy.api#httpError": 409
}
@@ -1171,7 +1247,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Creates the participant's connection.\n \n \n ParticipantToken is used for invoking this API instead of\n ConnectionToken.\n \n The participant token is valid for the lifetime of the participant – until they are\n part of a contact.\n The response URL for WEBSOCKET Type has a connect expiry timeout of 100s.\n Clients must manually connect to the returned websocket URL and subscribe to the desired\n topic.\n For chat, you need to publish the following on the established websocket\n connection:\n \n {\"topic\":\"aws/subscribe\",\"content\":{\"topics\":[\"aws/chat\"]}}\n\n Upon websocket URL expiry, as specified in the response ConnectionExpiry parameter,\n clients need to call this API again to obtain a new websocket URL and perform the same\n steps as before.\n \n Message streaming support: This API can also be used\n together with the StartContactStreaming API to create a participant connection for chat\n contacts that are not using a websocket. For more information about message streaming,\n Enable real-time chat\n message streaming in the Amazon Connect Administrator\n Guide.\n \n Feature specifications: For information about feature\n specifications, such as the allowed number of open websocket connections per\n participant, see Feature specifications in the Amazon Connect Administrator\n Guide.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.\n ",
+ "smithy.api#documentation": "Creates the participant's connection.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ParticipantToken is used for invoking this API instead of\n ConnectionToken.\n \n The participant token is valid for the lifetime of the participant – until they are\n part of a contact.\n The response URL for WEBSOCKET Type has a connect expiry timeout of 100s.\n Clients must manually connect to the returned websocket URL and subscribe to the desired\n topic.\n For chat, you need to publish the following on the established websocket\n connection:\n \n {\"topic\":\"aws/subscribe\",\"content\":{\"topics\":[\"aws/chat\"]}}\n\n Upon websocket URL expiry, as specified in the response ConnectionExpiry parameter,\n clients need to call this API again to obtain a new websocket URL and perform the same\n steps as before.\n \n Message streaming support: This API can also be used\n together with the StartContactStreaming API to create a participant connection for chat\n contacts that are not using a websocket. For more information about message streaming,\n Enable real-time chat\n message streaming in the Amazon Connect Administrator\n Guide.\n \n Feature specifications: For information about feature\n specifications, such as the allowed number of open websocket connections per\n participant, see Feature specifications in the Amazon Connect Administrator\n Guide.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.\n ",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/connection",
@@ -1253,7 +1329,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Retrieves the view for the specified view token.",
+ "smithy.api#documentation": "Retrieves the view for the specified view token.\n For security recommendations, see Amazon Connect Chat security best practices.",
"smithy.api#http": {
"method": "GET",
"uri": "/participant/views/{ViewToken}",
@@ -1322,7 +1398,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Disconnects a participant.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "Disconnects a participant.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/disconnect",
@@ -1392,7 +1468,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Provides a pre-signed URL for download of a completed attachment. This is an\n asynchronous API for use with active contacts.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "Provides a pre-signed URL for download of a completed attachment. This is an\n asynchronous API for use with active contacts.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/attachment",
@@ -1417,6 +1493,12 @@
"smithy.api#httpHeader": "X-Amz-Bearer",
"smithy.api#required": {}
}
+ },
+ "UrlExpiryInSeconds": {
+ "target": "com.amazonaws.connectparticipant#URLExpiryInSeconds",
+ "traits": {
+ "smithy.api#documentation": "The expiration time of the URL in ISO timestamp. It's specified in ISO 8601 format:\n yyyy-MM-ddThh:mm:ss.SSSZ. For example, 2019-11-08T02:41:28.172Z."
+ }
}
},
"traits": {
@@ -1437,6 +1519,89 @@
"traits": {
"smithy.api#documentation": "The expiration time of the URL in ISO timestamp. It's specified in ISO 8601 format: yyyy-MM-ddThh:mm:ss.SSSZ. For example, 2019-11-08T02:41:28.172Z."
}
+ },
+ "AttachmentSizeInBytes": {
+ "target": "com.amazonaws.connectparticipant#AttachmentSizeInBytes",
+ "traits": {
+ "smithy.api#default": null,
+ "smithy.api#documentation": "The size of the attachment in bytes.",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.connectparticipant#GetAuthenticationUrl": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.connectparticipant#GetAuthenticationUrlRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.connectparticipant#GetAuthenticationUrlResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.connectparticipant#AccessDeniedException"
+ },
+ {
+ "target": "com.amazonaws.connectparticipant#InternalServerException"
+ },
+ {
+ "target": "com.amazonaws.connectparticipant#ThrottlingException"
+ },
+ {
+ "target": "com.amazonaws.connectparticipant#ValidationException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Retrieves the AuthenticationUrl for the current authentication session for the\n AuthenticateCustomer flow block.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n - \nThis API can only be called within one minute of receiving the\n authenticationInitiated event.\n \n - \nThe current supported channel is chat. This API is not supported for Apple\n Messages for Business, WhatsApp, or SMS chats.\n \n\n ",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/participant/authentication-url",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.connectparticipant#GetAuthenticationUrlRequest": {
+ "type": "structure",
+ "members": {
+ "SessionId": {
+ "target": "com.amazonaws.connectparticipant#SessionId",
+ "traits": {
+ "smithy.api#documentation": "The sessionId provided in the authenticationInitiated event.",
+ "smithy.api#required": {}
+ }
+ },
+ "RedirectUri": {
+ "target": "com.amazonaws.connectparticipant#RedirectURI",
+ "traits": {
+ "smithy.api#documentation": "The URL where the customer will be redirected after Amazon Cognito authorizes the\n user.",
+ "smithy.api#required": {}
+ }
+ },
+ "ConnectionToken": {
+ "target": "com.amazonaws.connectparticipant#ParticipantToken",
+ "traits": {
+ "smithy.api#documentation": "The authentication token associated with the participant's connection.",
+ "smithy.api#httpHeader": "X-Amz-Bearer",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.connectparticipant#GetAuthenticationUrlResponse": {
+ "type": "structure",
+ "members": {
+ "AuthenticationUrl": {
+ "target": "com.amazonaws.connectparticipant#AuthenticationUrl",
+ "traits": {
+ "smithy.api#documentation": "The URL where the customer will sign in to the identity provider. This URL contains\n the authorize endpoint for the Cognito UserPool used in the authentication."
+ }
}
},
"traits": {
@@ -1466,7 +1631,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Retrieves a transcript of the session, including details about any attachments. For\n information about accessing past chat contact transcripts for a persistent chat, see\n Enable persistent chat.\n If you have a process that consumes events in the transcript of an chat that has ended, note that chat\n transcripts contain the following event content types if the event has occurred\n during the chat session:\n \n - \n\n application/vnd.amazonaws.connect.event.participant.left\n\n \n - \n\n application/vnd.amazonaws.connect.event.participant.joined\n\n \n - \n\n application/vnd.amazonaws.connect.event.chat.ended\n\n \n - \n\n application/vnd.amazonaws.connect.event.transfer.succeeded\n\n \n - \n\n application/vnd.amazonaws.connect.event.transfer.failed\n\n \n\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "Retrieves a transcript of the session, including details about any attachments. For\n information about accessing past chat contact transcripts for a persistent chat, see\n Enable persistent chat.\n For security recommendations, see Amazon Connect Chat security best practices.\n If you have a process that consumes events in the transcript of an chat that has\n ended, note that chat transcripts contain the following event content types if the event\n has occurred during the chat session:\n \n - \n\n application/vnd.amazonaws.connect.event.participant.left\n\n \n - \n\n application/vnd.amazonaws.connect.event.participant.joined\n\n \n - \n\n application/vnd.amazonaws.connect.event.chat.ended\n\n \n - \n\n application/vnd.amazonaws.connect.event.transfer.succeeded\n\n \n - \n\n application/vnd.amazonaws.connect.event.transfer.failed\n\n \n\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/transcript",
@@ -1839,6 +2004,15 @@
"target": "com.amazonaws.connectparticipant#Receipt"
}
},
+ "com.amazonaws.connectparticipant#RedirectURI": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 1024
+ }
+ }
+ },
"com.amazonaws.connectparticipant#ResourceId": {
"type": "string"
},
@@ -1963,7 +2137,7 @@
}
],
"traits": {
- "smithy.api#documentation": "\n The application/vnd.amazonaws.connect.event.connection.acknowledged\n ContentType will no longer be supported starting December 31, 2024. This event has\n been migrated to the CreateParticipantConnection API using the\n ConnectParticipant field.\n \n Sends an event. Message receipts are not supported when there are more than two active\n participants in the chat. Using the SendEvent API for message receipts when a supervisor\n is barged-in will result in a conflict exception.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "\n The application/vnd.amazonaws.connect.event.connection.acknowledged\n ContentType will no longer be supported starting December 31, 2024. This event has\n been migrated to the CreateParticipantConnection API using the\n ConnectParticipant field.\n \n Sends an event. Message receipts are not supported when there are more than two active\n participants in the chat. Using the SendEvent API for message receipts when a supervisor\n is barged-in will result in a conflict exception.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/event",
@@ -2050,7 +2224,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Sends a message.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "Sends a message.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/message",
@@ -2131,6 +2305,15 @@
"smithy.api#httpError": 402
}
},
+ "com.amazonaws.connectparticipant#SessionId": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 36,
+ "max": 36
+ }
+ }
+ },
"com.amazonaws.connectparticipant#SortKey": {
"type": "enum",
"members": {
@@ -2174,7 +2357,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Provides a pre-signed Amazon S3 URL in response for uploading the file directly to\n S3.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
+ "smithy.api#documentation": "Provides a pre-signed Amazon S3 URL in response for uploading the file directly to\n S3.\n For security recommendations, see Amazon Connect Chat security best practices.\n \n \n ConnectionToken is used for invoking this API instead of\n ParticipantToken.\n \n The Amazon Connect Participant Service APIs do not use Signature Version 4\n authentication.",
"smithy.api#http": {
"method": "POST",
"uri": "/participant/start-attachment-upload",
@@ -2240,7 +2423,7 @@
"UploadMetadata": {
"target": "com.amazonaws.connectparticipant#UploadMetadata",
"traits": {
- "smithy.api#documentation": "Fields to be used while uploading the attachment."
+ "smithy.api#documentation": "The headers to be provided while uploading the file to the URL."
}
}
},
@@ -2297,6 +2480,15 @@
"target": "com.amazonaws.connectparticipant#Item"
}
},
+ "com.amazonaws.connectparticipant#URLExpiryInSeconds": {
+ "type": "integer",
+ "traits": {
+ "smithy.api#range": {
+ "min": 5,
+ "max": 300
+ }
+ }
+ },
"com.amazonaws.connectparticipant#UploadMetadata": {
"type": "structure",
"members": {
diff --git a/codegen/sdk-codegen/aws-models/database-migration-service.json b/codegen/sdk-codegen/aws-models/database-migration-service.json
index 05e72af77b5..dd96ca60093 100644
--- a/codegen/sdk-codegen/aws-models/database-migration-service.json
+++ b/codegen/sdk-codegen/aws-models/database-migration-service.json
@@ -3697,7 +3697,7 @@
}
},
"ReplicationInstanceClass": {
- "target": "com.amazonaws.databasemigrationservice#String",
+ "target": "com.amazonaws.databasemigrationservice#ReplicationInstanceClass",
"traits": {
"smithy.api#documentation": "The compute and memory capacity of the replication instance as defined for the specified\n replication instance class. For example to specify the instance class dms.c4.large, set this parameter to \"dms.c4.large\"
.
\n For more information on the settings and capacities for the available replication instance classes, see \n \n Choosing the right DMS replication instance; and, \n Selecting the best size for a replication instance.\n
",
"smithy.api#required": {}
@@ -3780,6 +3780,12 @@
"traits": {
"smithy.api#documentation": "The type of IP address protocol used by a replication instance, \n such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. \n IPv6 only is not yet supported.
"
}
+ },
+ "KerberosAuthenticationSettings": {
+ "target": "com.amazonaws.databasemigrationservice#KerberosAuthenticationSettings",
+ "traits": {
+ "smithy.api#documentation": "Specifies the ID of the secret that stores the key cache file required for kerberos authentication, when creating a replication instance.
"
+ }
}
},
"traits": {
@@ -5071,6 +5077,9 @@
"target": "com.amazonaws.databasemigrationservice#DeleteEventSubscriptionResponse"
},
"errors": [
+ {
+ "target": "com.amazonaws.databasemigrationservice#AccessDeniedFault"
+ },
{
"target": "com.amazonaws.databasemigrationservice#InvalidResourceStateFault"
},
@@ -5533,6 +5542,9 @@
"target": "com.amazonaws.databasemigrationservice#DeleteReplicationSubnetGroupResponse"
},
"errors": [
+ {
+ "target": "com.amazonaws.databasemigrationservice#AccessDeniedFault"
+ },
{
"target": "com.amazonaws.databasemigrationservice#InvalidResourceStateFault"
},
@@ -6315,7 +6327,7 @@
"Filters": {
"target": "com.amazonaws.databasemigrationservice#FilterList",
"traits": {
- "smithy.api#documentation": "Filters applied to the data providers described in the form of key-value pairs.
\n Valid filter names: data-provider-identifier
"
+ "smithy.api#documentation": "Filters applied to the data providers described in the form of key-value pairs.
\n Valid filter names and values: data-provider-identifier, data provider ARN or name
"
}
},
"MaxRecords": {
@@ -7428,7 +7440,7 @@
"Filters": {
"target": "com.amazonaws.databasemigrationservice#FilterList",
"traits": {
- "smithy.api#documentation": "Filters applied to the instance profiles described in the form of key-value pairs.
"
+ "smithy.api#documentation": "Filters applied to the instance profiles described in the form of key-value pairs.
\n Valid filter names and values: instance-profile-identifier, instance profile ARN or name
"
}
},
"MaxRecords": {
@@ -8072,7 +8084,7 @@
"Filters": {
"target": "com.amazonaws.databasemigrationservice#FilterList",
"traits": {
- "smithy.api#documentation": "Filters applied to the migration projects described in the form of key-value pairs.
"
+ "smithy.api#documentation": "Filters applied to the migration projects described in the form of key-value pairs.
\n Valid filter names and values:
\n \n - \n
instance-profile-identifier, instance profile ARN or name
\n \n - \n
data-provider-identifier, data provider ARN or name
\n \n - \n
migration-project-identifier, migration project ARN or name
\n \n
"
}
},
"MaxRecords": {
@@ -9791,6 +9803,9 @@
"target": "com.amazonaws.databasemigrationservice#DescribeTableStatisticsResponse"
},
"errors": [
+ {
+ "target": "com.amazonaws.databasemigrationservice#AccessDeniedFault"
+ },
{
"target": "com.amazonaws.databasemigrationservice#InvalidResourceStateFault"
},
@@ -11717,6 +11732,12 @@
"traits": {
"smithy.api#documentation": "Sets hostname verification\n for the certificate. This setting is supported in DMS version 3.5.1 and later.
"
}
+ },
+ "UseLargeIntegerValue": {
+ "target": "com.amazonaws.databasemigrationservice#BooleanOptional",
+ "traits": {
+ "smithy.api#documentation": "Specifies using the large integer value with Kafka.
"
+ }
}
},
"traits": {
@@ -11740,6 +11761,32 @@
}
}
},
+ "com.amazonaws.databasemigrationservice#KerberosAuthenticationSettings": {
+ "type": "structure",
+ "members": {
+ "KeyCacheSecretId": {
+ "target": "com.amazonaws.databasemigrationservice#String",
+ "traits": {
+ "smithy.api#documentation": "Specifies the secret ID of the key cache for the replication instance.
"
+ }
+ },
+ "KeyCacheSecretIamArn": {
+ "target": "com.amazonaws.databasemigrationservice#String",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the IAM role that grants Amazon Web Services DMS access to the secret containing key cache file for the replication instance.
"
+ }
+ },
+ "Krb5FileContents": {
+ "target": "com.amazonaws.databasemigrationservice#String",
+ "traits": {
+ "smithy.api#documentation": "Specifies the ID of the secret that stores the key cache file required for kerberos authentication of the replication instance.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Specifies using Kerberos authentication settings for use with DMS.
"
+ }
+ },
"com.amazonaws.databasemigrationservice#KeyList": {
"type": "list",
"member": {
@@ -11808,6 +11855,12 @@
"traits": {
"smithy.api#documentation": "Set this optional parameter to true
to avoid adding a '0x' prefix\n to raw data in hexadecimal format. For example, by default, DMS adds a '0x'\n prefix to the LOB column type in hexadecimal format moving from an Oracle source to an\n Amazon Kinesis target. Use the NoHexPrefix
endpoint setting to enable\n migration of RAW data type columns without adding the '0x' prefix.
"
}
+ },
+ "UseLargeIntegerValue": {
+ "target": "com.amazonaws.databasemigrationservice#BooleanOptional",
+ "traits": {
+ "smithy.api#documentation": "Specifies using the large integer value with Kinesis.
"
+ }
}
},
"traits": {
@@ -12126,6 +12179,12 @@
"traits": {
"smithy.api#documentation": "Forces LOB lookup on inline LOB.
"
}
+ },
+ "AuthenticationMethod": {
+ "target": "com.amazonaws.databasemigrationservice#SqlServerAuthenticationMethod",
+ "traits": {
+ "smithy.api#documentation": "Specifies using Kerberos authentication with Microsoft SQL Server.
"
+ }
}
},
"traits": {
@@ -12841,6 +12900,9 @@
"target": "com.amazonaws.databasemigrationservice#ModifyEventSubscriptionResponse"
},
"errors": [
+ {
+ "target": "com.amazonaws.databasemigrationservice#AccessDeniedFault"
+ },
{
"target": "com.amazonaws.databasemigrationservice#KMSAccessDeniedFault"
},
@@ -13472,7 +13534,7 @@
}
},
"ReplicationInstanceClass": {
- "target": "com.amazonaws.databasemigrationservice#String",
+ "target": "com.amazonaws.databasemigrationservice#ReplicationInstanceClass",
"traits": {
"smithy.api#documentation": "The compute and memory capacity of the replication instance as defined for the specified\n replication instance class. For example to specify the instance class dms.c4.large, set this parameter to \"dms.c4.large\"
.
\n For more information on the settings and capacities for the available replication instance classes, see \n \n Selecting the right DMS replication instance for your migration.\n
"
}
@@ -13525,6 +13587,12 @@
"traits": {
"smithy.api#documentation": "The type of IP address protocol used by a replication instance, \n such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. \n IPv6 only is not yet supported.
"
}
+ },
+ "KerberosAuthenticationSettings": {
+ "target": "com.amazonaws.databasemigrationservice#KerberosAuthenticationSettings",
+ "traits": {
+ "smithy.api#documentation": "Specifies the ID of the secret that stores the key cache file required for kerberos authentication, when modifying a replication instance.
"
+ }
}
},
"traits": {
@@ -14168,6 +14236,23 @@
}
}
},
+ "com.amazonaws.databasemigrationservice#OracleAuthenticationMethod": {
+ "type": "enum",
+ "members": {
+ "Password": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "password"
+ }
+ },
+ "Kerberos": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "kerberos"
+ }
+ }
+ }
+ },
"com.amazonaws.databasemigrationservice#OracleDataProviderSettings": {
"type": "structure",
"members": {
@@ -14326,7 +14411,7 @@
"ArchivedLogsOnly": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "When this field is set to Y
, DMS only accesses the\n archived redo logs. If the archived redo logs are stored on\n Automatic Storage Management (ASM) only, the DMS user account needs to be\n granted ASM privileges.
"
+ "smithy.api#documentation": "When this field is set to True
, DMS only accesses the\n archived redo logs. If the archived redo logs are stored on\n Automatic Storage Management (ASM) only, the DMS user account needs to be\n granted ASM privileges.
"
}
},
"AsmPassword": {
@@ -14440,19 +14525,19 @@
"UseBFile": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "Set this attribute to Y to capture change data using the Binary Reader utility. Set\n UseLogminerReader
to N to set this attribute to Y. To use Binary Reader\n with Amazon RDS for Oracle as the source, you set additional attributes. For more information\n about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for\n CDC.
"
+ "smithy.api#documentation": "Set this attribute to True to capture change data using the Binary Reader utility. Set\n UseLogminerReader
to False to set this attribute to True. To use Binary Reader\n with Amazon RDS for Oracle as the source, you set additional attributes. For more information\n about using this setting with Oracle Automatic Storage Management (ASM), see Using Oracle LogMiner or DMS Binary Reader for\n CDC.
"
}
},
"UseDirectPathFullLoad": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "Set this attribute to Y to have DMS use a direct path full load. \n Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). \n By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
"
+ "smithy.api#documentation": "Set this attribute to True to have DMS use a direct path full load. \n Specify this value to use the direct path protocol in the Oracle Call Interface (OCI). \n By using this OCI protocol, you can bulk-load Oracle target tables during a full load.
"
}
},
"UseLogminerReader": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "Set this attribute to Y to capture change data using the Oracle LogMiner utility (the\n default). Set this attribute to N if you want to access the redo logs as a binary file.\n When you set UseLogminerReader
to N, also set UseBfile
to Y. For\n more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in\n the DMS User Guide.
"
+ "smithy.api#documentation": "Set this attribute to True to capture change data using the Oracle LogMiner utility (the\n default). Set this attribute to False if you want to access the redo logs as a binary file.\n When you set UseLogminerReader
to False, also set UseBfile
to True. For\n more information on this setting and using Oracle ASM, see Using Oracle LogMiner or DMS Binary Reader for CDC in\n the DMS User Guide.
"
}
},
"SecretsManagerAccessRoleArn": {
@@ -14494,7 +14579,13 @@
"OpenTransactionWindow": {
"target": "com.amazonaws.databasemigrationservice#IntegerOptional",
"traits": {
- "smithy.api#documentation": "The timeframe in minutes to check for open transactions for a CDC-only task.
\n You can\n specify an integer value between 0 (the default) and 240 (the maximum).
\n \n This parameter is only valid in DMS version 3.5.0 and later. DMS supports\n a window of up to 9.5 hours including the value for OpenTransactionWindow
.
\n "
+ "smithy.api#documentation": "The timeframe in minutes to check for open transactions for a CDC-only task.
\n You can\n specify an integer value between 0 (the default) and 240 (the maximum).
\n \n This parameter is only valid in DMS version 3.5.0 and later.
\n "
+ }
+ },
+ "AuthenticationMethod": {
+ "target": "com.amazonaws.databasemigrationservice#OracleAuthenticationMethod",
+ "traits": {
+ "smithy.api#documentation": "Specifies using Kerberos authentication with Oracle.
"
}
}
},
@@ -14512,7 +14603,7 @@
}
},
"ReplicationInstanceClass": {
- "target": "com.amazonaws.databasemigrationservice#String",
+ "target": "com.amazonaws.databasemigrationservice#ReplicationInstanceClass",
"traits": {
"smithy.api#documentation": "The compute and memory capacity of the replication instance as defined for the specified\n replication instance class. For example to specify the instance class dms.c4.large, set this parameter to \"dms.c4.large\"
.
\n For more information on the settings and capacities for the available replication instance classes, see \n \n Selecting the right DMS replication instance for your migration.\n
"
}
@@ -14708,13 +14799,13 @@
"CaptureDdls": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "To capture DDL events, DMS creates various artifacts in\n the PostgreSQL database when the task starts. You can later\n remove these artifacts.
\n If this value is set to N
, you don't have to create tables or\n triggers on the source database.
"
+ "smithy.api#documentation": "To capture DDL events, DMS creates various artifacts in\n the PostgreSQL database when the task starts. You can later\n remove these artifacts.
\n The default value is true
.
\n If this value is set to N
, you don't have to create tables or\n triggers on the source database.
"
}
},
"MaxFileSize": {
"target": "com.amazonaws.databasemigrationservice#IntegerOptional",
"traits": {
- "smithy.api#documentation": "Specifies the maximum size (in KB) of any .csv file used to\n transfer data to PostgreSQL.
\n Example: maxFileSize=512
\n
"
+ "smithy.api#documentation": "Specifies the maximum size (in KB) of any .csv file used to\n transfer data to PostgreSQL.
\n The default value is 32,768 KB (32 MB).
\n Example: maxFileSize=512
\n
"
}
},
"DatabaseName": {
@@ -14726,7 +14817,7 @@
"DdlArtifactsSchema": {
"target": "com.amazonaws.databasemigrationservice#String",
"traits": {
- "smithy.api#documentation": "The schema in which the operational DDL database artifacts\n are created.
\n Example: ddlArtifactsSchema=xyzddlschema;
\n
"
+ "smithy.api#documentation": "The schema in which the operational DDL database artifacts\n are created.
\n The default value is public
.
\n Example: ddlArtifactsSchema=xyzddlschema;
\n
"
}
},
"ExecuteTimeout": {
@@ -14738,25 +14829,25 @@
"FailTasksOnLobTruncation": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "When set to true
, this value causes a task to fail if the\n actual size of a LOB column is greater than the specified\n LobMaxSize
.
\n If task is set to Limited LOB mode and this option is set to\n true, the task fails instead of truncating the LOB data.
"
+ "smithy.api#documentation": "When set to true
, this value causes a task to fail if the\n actual size of a LOB column is greater than the specified\n LobMaxSize
.
\n The default value is false
.
\n If task is set to Limited LOB mode and this option is set to\n true, the task fails instead of truncating the LOB data.
"
}
},
"HeartbeatEnable": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this,\n it prevents idle logical replication slots from holding onto old WAL logs, which can result in\n storage full situations on the source. This heartbeat keeps restart_lsn
moving\n and prevents storage full scenarios.
"
+ "smithy.api#documentation": "The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By doing this,\n it prevents idle logical replication slots from holding onto old WAL logs, which can result in\n storage full situations on the source. This heartbeat keeps restart_lsn
moving\n and prevents storage full scenarios.
\n The default value is false
.
"
}
},
"HeartbeatSchema": {
"target": "com.amazonaws.databasemigrationservice#String",
"traits": {
- "smithy.api#documentation": "Sets the schema in which the heartbeat artifacts are created.
"
+ "smithy.api#documentation": "Sets the schema in which the heartbeat artifacts are created.
\n The default value is public
.
"
}
},
"HeartbeatFrequency": {
"target": "com.amazonaws.databasemigrationservice#IntegerOptional",
"traits": {
- "smithy.api#documentation": "Sets the WAL heartbeat frequency (in minutes).
"
+ "smithy.api#documentation": "Sets the WAL heartbeat frequency (in minutes).
\n The default value is 5 minutes.
"
}
},
"Password": {
@@ -14792,7 +14883,7 @@
"PluginName": {
"target": "com.amazonaws.databasemigrationservice#PluginNameValue",
"traits": {
- "smithy.api#documentation": "Specifies the plugin to use to create a replication slot.
"
+ "smithy.api#documentation": "Specifies the plugin to use to create a replication slot.
\n The default value is pglogical
.
"
}
},
"SecretsManagerAccessRoleArn": {
@@ -14816,19 +14907,19 @@
"MapBooleanAsBoolean": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "When true, lets PostgreSQL migrate the boolean type as boolean. By default, PostgreSQL migrates booleans as \n varchar(5)
. You must set this setting on both the source and target endpoints for it to take effect.
"
+ "smithy.api#documentation": "When true, lets PostgreSQL migrate the boolean type as boolean. By default, PostgreSQL migrates booleans as \n varchar(5)
. You must set this setting on both the source and target endpoints for it to take effect.
\n The default value is false
.
"
}
},
"MapJsonbAsClob": {
"target": "com.amazonaws.databasemigrationservice#BooleanOptional",
"traits": {
- "smithy.api#documentation": "When true, DMS migrates JSONB values as CLOB.
"
+ "smithy.api#documentation": "When true, DMS migrates JSONB values as CLOB.
\n The default value is false
.
"
}
},
"MapLongVarcharAs": {
"target": "com.amazonaws.databasemigrationservice#LongVarcharMappingType",
"traits": {
- "smithy.api#documentation": "When true, DMS migrates LONG values as VARCHAR.
"
+ "smithy.api#documentation": "Sets what datatype to map LONG values as.
\n The default value is wstring
.
"
}
},
"DatabaseMode": {
@@ -14842,6 +14933,12 @@
"traits": {
"smithy.api#documentation": "The Babelfish for Aurora PostgreSQL database name for the endpoint.
"
}
+ },
+ "DisableUnicodeSourceFilter": {
+ "target": "com.amazonaws.databasemigrationservice#BooleanOptional",
+ "traits": {
+ "smithy.api#documentation": "Disables the Unicode source filter with PostgreSQL, for values passed into the Selection rule filter on Source Endpoint column values. \n By default DMS performs source filter comparisons using a Unicode string which can cause look ups to ignore the indexes in the text columns and slow down migrations.
\n Unicode support should only be disabled when using a selection rule filter is on a text column in the Source database that is indexed.
"
+ }
}
},
"traits": {
@@ -15948,7 +16045,7 @@
"StartReplicationType": {
"target": "com.amazonaws.databasemigrationservice#String",
"traits": {
- "smithy.api#documentation": "The replication type.
"
+ "smithy.api#documentation": "The type of replication to start.
"
}
},
"CdcStartTime": {
@@ -16114,7 +16211,7 @@
}
},
"ReplicationInstanceClass": {
- "target": "com.amazonaws.databasemigrationservice#String",
+ "target": "com.amazonaws.databasemigrationservice#ReplicationInstanceClass",
"traits": {
"smithy.api#documentation": "The compute and memory capacity of the replication instance as defined for the specified\n replication instance class. It is a required parameter, although a default value is\n pre-selected in the DMS console.
\n For more information on the settings and capacities for the available replication instance classes, see \n \n Selecting the right DMS replication instance for your migration.\n
"
}
@@ -16262,12 +16359,27 @@
"traits": {
"smithy.api#documentation": "The type of IP address protocol used by a replication instance, \n such as IPv4 only or Dual-stack that supports both IPv4 and IPv6 addressing. \n IPv6 only is not yet supported.
"
}
+ },
+ "KerberosAuthenticationSettings": {
+ "target": "com.amazonaws.databasemigrationservice#KerberosAuthenticationSettings",
+ "traits": {
+ "smithy.api#documentation": "Specifies the ID of the secret that stores the key cache file required for kerberos authentication, when replicating an instance.
"
+ }
}
},
"traits": {
"smithy.api#documentation": "Provides information that defines a replication instance.
"
}
},
+ "com.amazonaws.databasemigrationservice#ReplicationInstanceClass": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 30
+ }
+ }
+ },
"com.amazonaws.databasemigrationservice#ReplicationInstanceIpv6AddressList": {
"type": "list",
"member": {
@@ -16341,7 +16453,7 @@
"type": "structure",
"members": {
"ReplicationInstanceClass": {
- "target": "com.amazonaws.databasemigrationservice#String",
+ "target": "com.amazonaws.databasemigrationservice#ReplicationInstanceClass",
"traits": {
"smithy.api#documentation": "The compute and memory capacity of the replication instance as defined for the specified\n replication instance class.
\n For more information on the settings and capacities for the available replication instance classes, see \n \n Selecting the right DMS replication instance for your migration.\n
"
}
@@ -16589,7 +16701,7 @@
"StopReason": {
"target": "com.amazonaws.databasemigrationservice#String",
"traits": {
- "smithy.api#documentation": "The reason the replication task was stopped. This response parameter can return one of\n the following values:
\n \n - \n
\n \"Stop Reason NORMAL\"
\n
\n \n - \n
\n \"Stop Reason RECOVERABLE_ERROR\"
\n
\n \n - \n
\n \"Stop Reason FATAL_ERROR\"
\n
\n \n - \n
\n \"Stop Reason FULL_LOAD_ONLY_FINISHED\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_AFTER_FULL_LOAD\"
– Full load completed, with cached changes not applied
\n \n - \n
\n \"Stop Reason STOPPED_AFTER_CACHED_EVENTS\"
– Full load completed, with cached changes applied
\n \n - \n
\n \"Stop Reason EXPRESS_LICENSE_LIMITS_REACHED\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_AFTER_DDL_APPLY\"
– User-defined stop task after DDL applied
\n \n - \n
\n \"Stop Reason STOPPED_DUE_TO_LOW_MEMORY\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_DUE_TO_LOW_DISK\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_AT_SERVER_TIME\"
– User-defined server time for stopping task
\n \n - \n
\n \"Stop Reason STOPPED_AT_COMMIT_TIME\"
– User-defined commit time for stopping task
\n \n - \n
\n \"Stop Reason RECONFIGURATION_RESTART\"
\n
\n \n - \n
\n \"Stop Reason RECYCLE_TASK\"
\n
\n \n
"
+ "smithy.api#documentation": "The reason the replication task was stopped. This response parameter can return one of\n the following values:
\n \n - \n
\n \"Stop Reason NORMAL\"
– The task completed successfully with no additional information returned.
\n \n - \n
\n \"Stop Reason RECOVERABLE_ERROR\"
\n
\n \n - \n
\n \"Stop Reason FATAL_ERROR\"
\n
\n \n - \n
\n \"Stop Reason FULL_LOAD_ONLY_FINISHED\"
– The task completed the full load phase.\n DMS applied cached changes if you set StopTaskCachedChangesApplied
to true
.
\n \n - \n
\n \"Stop Reason STOPPED_AFTER_FULL_LOAD\"
– Full load completed, with cached changes not applied
\n \n - \n
\n \"Stop Reason STOPPED_AFTER_CACHED_EVENTS\"
– Full load completed, with cached changes applied
\n \n - \n
\n \"Stop Reason EXPRESS_LICENSE_LIMITS_REACHED\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_AFTER_DDL_APPLY\"
– User-defined stop task after DDL applied
\n \n - \n
\n \"Stop Reason STOPPED_DUE_TO_LOW_MEMORY\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_DUE_TO_LOW_DISK\"
\n
\n \n - \n
\n \"Stop Reason STOPPED_AT_SERVER_TIME\"
– User-defined server time for stopping task
\n \n - \n
\n \"Stop Reason STOPPED_AT_COMMIT_TIME\"
– User-defined commit time for stopping task
\n \n - \n
\n \"Stop Reason RECONFIGURATION_RESTART\"
\n
\n \n - \n
\n \"Stop Reason RECYCLE_TASK\"
\n
\n \n
"
}
},
"ReplicationTaskCreationDate": {
@@ -16691,7 +16803,7 @@
}
},
"S3ObjectUrl": {
- "target": "com.amazonaws.databasemigrationservice#String",
+ "target": "com.amazonaws.databasemigrationservice#SecretString",
"traits": {
"smithy.api#documentation": " The URL of the S3 object containing the task assessment results.
\n The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn\n in the request.
"
}
@@ -16728,7 +16840,7 @@
"Status": {
"target": "com.amazonaws.databasemigrationservice#String",
"traits": {
- "smithy.api#documentation": "Assessment run status.
\n This status can have one of the following values:
\n \n - \n
\n \"cancelling\"
– The assessment run was canceled by the\n CancelReplicationTaskAssessmentRun
operation.
\n \n - \n
\n \"deleting\"
– The assessment run was deleted by the\n DeleteReplicationTaskAssessmentRun
operation.
\n \n - \n
\n \"failed\"
– At least one individual assessment completed with a\n failed
status.
\n \n - \n
\n \"error-provisioning\"
– An internal error occurred while\n resources were provisioned (during provisioning
status).
\n \n - \n
\n \"error-executing\"
– An internal error occurred while\n individual assessments ran (during running
status).
\n \n - \n
\n \"invalid state\"
– The assessment run is in an unknown state.
\n \n - \n
\n \"passed\"
– All individual assessments have completed, and none\n has a failed
status.
\n \n - \n
\n \"provisioning\"
– Resources required to run individual\n assessments are being provisioned.
\n \n - \n
\n \"running\"
– Individual assessments are being run.
\n \n - \n
\n \"starting\"
– The assessment run is starting, but resources are not yet\n being provisioned for individual assessments.
\n \n
"
+ "smithy.api#documentation": "Assessment run status.
\n This status can have one of the following values:
\n \n - \n
\n \"cancelling\"
– The assessment run was canceled by the\n CancelReplicationTaskAssessmentRun
operation.
\n \n - \n
\n \"deleting\"
– The assessment run was deleted by the\n DeleteReplicationTaskAssessmentRun
operation.
\n \n - \n
\n \"failed\"
– At least one individual assessment completed with a\n failed
status.
\n \n - \n
\n \"error-provisioning\"
– An internal error occurred while\n resources were provisioned (during provisioning
status).
\n \n - \n
\n \"error-executing\"
– An internal error occurred while\n individual assessments ran (during running
status).
\n \n - \n
\n \"invalid state\"
– The assessment run is in an unknown state.
\n \n - \n
\n \"passed\"
– All individual assessments have completed, and none\n has a failed
status.
\n \n - \n
\n \"provisioning\"
– Resources required to run individual\n assessments are being provisioned.
\n \n - \n
\n \"running\"
– Individual assessments are being run.
\n \n - \n
\n \"starting\"
– The assessment run is starting, but resources are not yet\n being provisioned for individual assessments.
\n \n - \n
\n \"warning\"
– At least one individual assessment completed with a warning
status.
\n \n
"
}
},
"ReplicationTaskAssessmentRunCreationDate": {
@@ -17703,6 +17815,23 @@
}
}
},
+ "com.amazonaws.databasemigrationservice#SqlServerAuthenticationMethod": {
+ "type": "enum",
+ "members": {
+ "Password": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "password"
+ }
+ },
+ "Kerberos": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "kerberos"
+ }
+ }
+ }
+ },
"com.amazonaws.databasemigrationservice#SslSecurityProtocolValue": {
"type": "enum",
"members": {
@@ -18446,7 +18575,7 @@
"StartReplicationType": {
"target": "com.amazonaws.databasemigrationservice#String",
"traits": {
- "smithy.api#documentation": "The replication type.
",
+ "smithy.api#documentation": "The replication type.
\n When the replication type is full-load
or full-load-and-cdc
, the only valid value \n for the first run of the replication is start-replication
. This option will start the replication.
\n You can also use ReloadTables to reload specific tables that failed during replication instead \n of restarting the replication.
\n The resume-processing
option isn't applicable for a full-load replication,\n because you can't resume partially loaded tables during the full load phase.
\n For a full-load-and-cdc
replication, DMS migrates table data, and then applies data changes \n that occur on the source. To load all the tables again, and start capturing source changes, \n use reload-target
. Otherwise use resume-processing
, to replicate the \n changes from the last stop position.
",
"smithy.api#required": {}
}
},
diff --git a/codegen/sdk-codegen/aws-models/datasync.json b/codegen/sdk-codegen/aws-models/datasync.json
index 4f914e33372..a789c126283 100644
--- a/codegen/sdk-codegen/aws-models/datasync.json
+++ b/codegen/sdk-codegen/aws-models/datasync.json
@@ -626,7 +626,7 @@
"Subdirectory": {
"target": "com.amazonaws.datasync#EfsSubdirectory",
"traits": {
- "smithy.api#documentation": "Specifies a mount path for your Amazon EFS file system. This is where DataSync reads or writes data (depending on if this is a source or destination location)\n on your file system.
\n By default, DataSync uses the root directory (or access point if you provide one by using\n AccessPointArn
). You can also include subdirectories using forward slashes (for\n example, /path/to/folder
).
"
+ "smithy.api#documentation": "Specifies a mount path for your Amazon EFS file system. This is where DataSync reads or writes data on your file system (depending on if this is a source or destination location).
\n By default, DataSync uses the root directory (or access point if you provide one by using\n AccessPointArn
). You can also include subdirectories using forward slashes (for\n example, /path/to/folder
).
"
}
},
"EfsFilesystemArn": {
@@ -714,27 +714,27 @@
"FsxFilesystemArn": {
"target": "com.amazonaws.datasync#FsxFilesystemArn",
"traits": {
- "smithy.api#documentation": "The Amazon Resource Name (ARN) for the FSx for Lustre file system.
",
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the FSx for Lustre file system.
",
"smithy.api#required": {}
}
},
"SecurityGroupArns": {
"target": "com.amazonaws.datasync#Ec2SecurityGroupArnList",
"traits": {
- "smithy.api#documentation": "The Amazon Resource Names (ARNs) of the security groups that are used to configure the\n FSx for Lustre file system.
",
+ "smithy.api#documentation": "Specifies the Amazon Resource Names (ARNs) of up to five security groups that provide access to your\n FSx for Lustre file system.
\n The security groups must be able to access the file system's ports. The file system must\n also allow access from the security groups. For information about file system access, see the\n \n Amazon FSx for Lustre User Guide\n .
",
"smithy.api#required": {}
}
},
"Subdirectory": {
"target": "com.amazonaws.datasync#FsxLustreSubdirectory",
"traits": {
- "smithy.api#documentation": "A subdirectory in the location's path. This subdirectory in the FSx for Lustre\n file system is used to read data from the FSx for Lustre source location or write\n data to the FSx for Lustre destination.
"
+ "smithy.api#documentation": "Specifies a mount path for your FSx for Lustre file system. The path can include subdirectories.
\n When the location is used as a source, DataSync reads data from the mount path. When the location is used as a destination, DataSync writes data to the mount path. If you don't include this parameter, DataSync uses the file system's root directory (/
).
"
}
},
"Tags": {
"target": "com.amazonaws.datasync#InputTagList",
"traits": {
- "smithy.api#documentation": "The key-value pair that represents a tag that you want to add to the resource. The value\n can be an empty string. This value helps you manage, filter, and search for your resources. We\n recommend that you create a name tag for your location.
"
+ "smithy.api#documentation": "Specifies labels that help you categorize, filter, and search for your Amazon Web Services resources. We recommend creating at least a name tag for your location.
"
}
}
},
@@ -748,7 +748,7 @@
"LocationArn": {
"target": "com.amazonaws.datasync#LocationArn",
"traits": {
- "smithy.api#documentation": "The Amazon Resource Name (ARN) of the FSx for Lustre file system location that's\n created.
"
+ "smithy.api#documentation": "The Amazon Resource Name (ARN) of the FSx for Lustre file system location that\n you created.
"
}
}
},
@@ -802,7 +802,7 @@
"Subdirectory": {
"target": "com.amazonaws.datasync#FsxOntapSubdirectory",
"traits": {
- "smithy.api#documentation": "Specifies a path to the file share in the SVM where you'll copy your data.
\n You can specify a junction path (also known as a mount point), qtree path (for NFS file\n shares), or share name (for SMB file shares). For example, your mount path might be\n /vol1
, /vol1/tree1
, or /share1
.
\n \n Don't specify a junction path in the SVM's root volume. For more information, see Managing FSx for ONTAP storage virtual machines in the Amazon FSx for NetApp ONTAP User Guide.
\n "
+ "smithy.api#documentation": "Specifies a path to the file share in the SVM where you want to transfer data to or from.
\n You can specify a junction path (also known as a mount point), qtree path (for NFS file\n shares), or share name (for SMB file shares). For example, your mount path might be\n /vol1
, /vol1/tree1
, or /share1
.
\n \n Don't specify a junction path in the SVM's root volume. For more information, see Managing FSx for ONTAP storage virtual machines in the Amazon FSx for NetApp ONTAP User Guide.
\n "
}
},
"Tags": {
@@ -964,7 +964,7 @@
"Domain": {
"target": "com.amazonaws.datasync#SmbDomain",
"traits": {
- "smithy.api#documentation": "Specifies the name of the Microsoft Active Directory domain that the FSx for Windows File Server file system belongs to.
\n If you have multiple Active Directory domains in your environment, configuring this\n parameter makes sure that DataSync connects to the right file system.
"
+ "smithy.api#documentation": "Specifies the name of the Windows domain that the FSx for Windows File Server file system belongs to.
\n If you have multiple Active Directory domains in your environment, configuring this\n parameter makes sure that DataSync connects to the right file system.
"
}
},
"Password": {
@@ -1133,7 +1133,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Creates a transfer location for a Network File System (NFS) file\n server. DataSync can use this location as a source or destination for\n transferring data.
\n Before you begin, make sure that you understand how DataSync\n accesses\n NFS file servers.
\n \n If you're copying data to or from an Snowcone device, you can also use\n CreateLocationNfs
to create your transfer location. For more information, see\n Configuring transfers with Snowcone.
\n "
+ "smithy.api#documentation": "Creates a transfer location for a Network File System (NFS) file\n server. DataSync can use this location as a source or destination for\n transferring data.
\n Before you begin, make sure that you understand how DataSync\n accesses\n NFS file servers.
"
}
},
"com.amazonaws.datasync#CreateLocationNfsRequest": {
@@ -4112,6 +4112,21 @@
{
"target": "com.amazonaws.datasync#UpdateLocationAzureBlob"
},
+ {
+ "target": "com.amazonaws.datasync#UpdateLocationEfs"
+ },
+ {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxLustre"
+ },
+ {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxOntap"
+ },
+ {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxOpenZfs"
+ },
+ {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxWindows"
+ },
{
"target": "com.amazonaws.datasync#UpdateLocationHdfs"
},
@@ -4121,6 +4136,9 @@
{
"target": "com.amazonaws.datasync#UpdateLocationObjectStorage"
},
+ {
+ "target": "com.amazonaws.datasync#UpdateLocationS3"
+ },
{
"target": "com.amazonaws.datasync#UpdateLocationSmb"
},
@@ -5197,7 +5215,7 @@
}
},
"traits": {
- "smithy.api#documentation": "Specifies the Network File System (NFS) protocol configuration that DataSync\n uses to access your Amazon FSx for OpenZFS or Amazon FSx for NetApp ONTAP file\n system.
"
+ "smithy.api#documentation": "Specifies the Network File System (NFS) protocol configuration that DataSync\n uses to access your FSx for OpenZFS file system or FSx for ONTAP file\n system's storage virtual machine (SVM).
"
}
},
"com.amazonaws.datasync#FsxProtocolSmb": {
@@ -5206,7 +5224,7 @@
"Domain": {
"target": "com.amazonaws.datasync#SmbDomain",
"traits": {
- "smithy.api#documentation": "Specifies the fully qualified domain name (FQDN) of the Microsoft Active Directory that\n your storage virtual machine (SVM) belongs to.
\n If you have multiple domains in your environment, configuring this setting makes sure that\n DataSync connects to the right SVM.
"
+ "smithy.api#documentation": "Specifies the name of the Windows domain that your storage virtual machine (SVM) belongs to.
\n If you have multiple Active Directory domains in your environment, configuring this parameter makes sure that\n DataSync connects to the right SVM.
"
}
},
"MountOptions": {
@@ -5228,7 +5246,63 @@
}
},
"traits": {
- "smithy.api#documentation": "Specifies the Server Message Block (SMB) protocol configuration that DataSync uses to access your Amazon FSx for NetApp ONTAP file system. For more information, see\n Accessing FSx for ONTAP file systems.
"
+ "smithy.api#documentation": "Specifies the Server Message Block (SMB) protocol configuration that DataSync uses to access your Amazon FSx for NetApp ONTAP file system's storage virtual machine (SVM). For more information, see\n Providing DataSync access to FSx for ONTAP file systems.
"
+ }
+ },
+ "com.amazonaws.datasync#FsxUpdateProtocol": {
+ "type": "structure",
+ "members": {
+ "NFS": {
+ "target": "com.amazonaws.datasync#FsxProtocolNfs"
+ },
+ "SMB": {
+ "target": "com.amazonaws.datasync#FsxUpdateProtocolSmb",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Server Message Block (SMB) protocol configuration that DataSync\n uses to access your FSx for ONTAP file system's storage virtual machine (SVM).
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Specifies the data transfer protocol that DataSync uses to access your\n Amazon FSx file system.
\n \n You can't update the Network File System (NFS) protocol configuration for FSx for ONTAP locations. DataSync currently only supports NFS version 3 with this location type.
\n "
+ }
+ },
+ "com.amazonaws.datasync#FsxUpdateProtocolSmb": {
+ "type": "structure",
+ "members": {
+ "Domain": {
+ "target": "com.amazonaws.datasync#FsxUpdateSmbDomain",
+ "traits": {
+ "smithy.api#documentation": "Specifies the name of the Windows domain that your storage virtual machine (SVM) belongs to.
\n If you have multiple Active Directory domains in your environment, configuring this parameter makes sure that DataSync connects to the right SVM.
"
+ }
+ },
+ "MountOptions": {
+ "target": "com.amazonaws.datasync#SmbMountOptions"
+ },
+ "Password": {
+ "target": "com.amazonaws.datasync#SmbPassword",
+ "traits": {
+ "smithy.api#documentation": "Specifies the password of a user who has permission to access your SVM.
"
+ }
+ },
+ "User": {
+ "target": "com.amazonaws.datasync#SmbUser",
+ "traits": {
+ "smithy.api#documentation": "Specifies a user that can mount and access the files, folders, and metadata in your SVM.
\n For information about choosing a user with the right level of access for your transfer, see Using\n the SMB protocol.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Specifies the Server Message Block (SMB) protocol configuration that DataSync uses to access your Amazon FSx for NetApp ONTAP file system's storage virtual machine (SVM). For more information, see\n Providing DataSync access to FSx for ONTAP file systems.
"
+ }
+ },
+ "com.amazonaws.datasync#FsxUpdateSmbDomain": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 253
+ },
+ "smithy.api#pattern": "^([A-Za-z0-9]((\\.|-+)?[A-Za-z0-9]){0,252})?$"
}
},
"com.amazonaws.datasync#FsxWindowsSubdirectory": {
@@ -7750,7 +7824,7 @@
}
},
"traits": {
- "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that DataSync uses to access your S3 bucket.
\n For more information, see Accessing\n S3 buckets.
"
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that DataSync uses to access your S3 bucket.
\n For more information, see Providing DataSync access to S3 buckets.
"
}
},
"com.amazonaws.datasync#S3ManifestConfig": {
@@ -9174,7 +9248,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Modifies some configurations of the Microsoft Azure Blob Storage transfer location that you're using with DataSync.
"
+ "smithy.api#documentation": "Modifies the following configurations of the Microsoft Azure Blob Storage transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with Azure Blob Storage.
"
}
},
"com.amazonaws.datasync#UpdateLocationAzureBlobRequest": {
@@ -9235,6 +9309,291 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.datasync#UpdateLocationEfs": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.datasync#UpdateLocationEfsRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.datasync#UpdateLocationEfsResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.datasync#InternalException"
+ },
+ {
+ "target": "com.amazonaws.datasync#InvalidRequestException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Amazon EFS transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with Amazon EFS.
"
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationEfsRequest": {
+ "type": "structure",
+ "members": {
+ "LocationArn": {
+ "target": "com.amazonaws.datasync#LocationArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the Amazon EFS transfer location that you're updating.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Subdirectory": {
+ "target": "com.amazonaws.datasync#EfsSubdirectory",
+ "traits": {
+ "smithy.api#documentation": "Specifies a mount path for your Amazon EFS file system. This is where DataSync reads or writes data on your file system (depending on whether this is a source or destination location).
\n By default, DataSync uses the root directory (or access point if you provide one by using\n AccessPointArn
). You can also include subdirectories using forward slashes (for\n example, /path/to/folder
).
"
+ }
+ },
+ "AccessPointArn": {
+ "target": "com.amazonaws.datasync#UpdatedEfsAccessPointArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the access point that DataSync uses\n to mount your Amazon EFS file system.
\n For more information, see Accessing restricted Amazon EFS file systems.
"
+ }
+ },
+ "FileSystemAccessRoleArn": {
+ "target": "com.amazonaws.datasync#UpdatedEfsIamRoleArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies an Identity and Access Management (IAM) role that allows DataSync to access your Amazon EFS file system.
\n For information on creating this role, see Creating a DataSync IAM role for Amazon EFS file system access.
"
+ }
+ },
+ "InTransitEncryption": {
+ "target": "com.amazonaws.datasync#EfsInTransitEncryption",
+ "traits": {
+ "smithy.api#documentation": "Specifies whether you want DataSync to use Transport Layer Security (TLS) 1.2\n encryption when it transfers data to or from your Amazon EFS file system.
\n If you specify an access point using AccessPointArn
or an IAM\n role using FileSystemAccessRoleArn
, you must set this parameter to\n TLS1_2
.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationEfsResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxLustre": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxLustreRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxLustreResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.datasync#InternalException"
+ },
+ {
+ "target": "com.amazonaws.datasync#InvalidRequestException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Amazon FSx for Lustre transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with FSx for Lustre.
"
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxLustreRequest": {
+ "type": "structure",
+ "members": {
+ "LocationArn": {
+ "target": "com.amazonaws.datasync#LocationArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the FSx for Lustre transfer location that you're updating.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Subdirectory": {
+ "target": "com.amazonaws.datasync#SmbSubdirectory",
+ "traits": {
+ "smithy.api#documentation": "Specifies a mount path for your FSx for Lustre file system. The path can include subdirectories.
\n When the location is used as a source, DataSync reads data from the mount path. When the location is used as a destination, DataSync writes data to the mount path. If you don't include this parameter, DataSync uses the file system's root directory (/
).
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxLustreResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxOntap": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxOntapRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxOntapResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.datasync#InternalException"
+ },
+ {
+ "target": "com.amazonaws.datasync#InvalidRequestException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Amazon FSx for NetApp ONTAP transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with FSx for ONTAP.
"
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxOntapRequest": {
+ "type": "structure",
+ "members": {
+ "LocationArn": {
+ "target": "com.amazonaws.datasync#LocationArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the FSx for ONTAP transfer location that you're updating.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Protocol": {
+ "target": "com.amazonaws.datasync#FsxUpdateProtocol",
+ "traits": {
+ "smithy.api#documentation": "Specifies the data transfer protocol that DataSync uses to access your Amazon FSx file system.
"
+ }
+ },
+ "Subdirectory": {
+ "target": "com.amazonaws.datasync#FsxOntapSubdirectory",
+ "traits": {
+ "smithy.api#documentation": "Specifies a path to the file share in the storage virtual machine (SVM) where you want to transfer data to or from.
\n You can specify a junction path (also known as a mount point), qtree path (for NFS file\n shares), or share name (for SMB file shares). For example, your mount path might be\n /vol1
, /vol1/tree1
, or /share1
.
\n \n Don't specify a junction path in the SVM's root volume. For more information, see Managing FSx for ONTAP storage virtual machines in the Amazon FSx for NetApp ONTAP User Guide.
\n "
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxOntapResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxOpenZfs": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxOpenZfsRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxOpenZfsResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.datasync#InternalException"
+ },
+ {
+ "target": "com.amazonaws.datasync#InvalidRequestException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Amazon FSx for OpenZFS transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with FSx for OpenZFS.
\n \n Request parameters related to SMB
aren't supported with the\n UpdateLocationFsxOpenZfs
operation.
\n "
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxOpenZfsRequest": {
+ "type": "structure",
+ "members": {
+ "LocationArn": {
+ "target": "com.amazonaws.datasync#LocationArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the FSx for OpenZFS transfer location that you're updating.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Protocol": {
+ "target": "com.amazonaws.datasync#FsxProtocol"
+ },
+ "Subdirectory": {
+ "target": "com.amazonaws.datasync#SmbSubdirectory",
+ "traits": {
+ "smithy.api#documentation": "Specifies a subdirectory in the location's path that must begin with /fsx
. DataSync uses this subdirectory to read or write data (depending on whether the file\n system is a source or destination location).
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxOpenZfsResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxWindows": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxWindowsRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.datasync#UpdateLocationFsxWindowsResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.datasync#InternalException"
+ },
+ {
+ "target": "com.amazonaws.datasync#InvalidRequestException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Amazon FSx for Windows File Server transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with FSx for Windows File Server.
"
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxWindowsRequest": {
+ "type": "structure",
+ "members": {
+ "LocationArn": {
+ "target": "com.amazonaws.datasync#LocationArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the ARN of the FSx for Windows File Server transfer location that you're updating.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Subdirectory": {
+ "target": "com.amazonaws.datasync#FsxWindowsSubdirectory",
+ "traits": {
+ "smithy.api#documentation": "Specifies a mount path for your file system using forward slashes. DataSync uses this subdirectory to read or write data (depending on whether the file\n system is a source or destination location).
"
+ }
+ },
+ "Domain": {
+ "target": "com.amazonaws.datasync#FsxUpdateSmbDomain",
+ "traits": {
+ "smithy.api#documentation": "Specifies the name of the Windows domain that your FSx for Windows File Server file system belongs to.
\n If you have multiple Active Directory domains in your environment, configuring this parameter makes sure that DataSync connects to the right file system.
"
+ }
+ },
+ "User": {
+ "target": "com.amazonaws.datasync#SmbUser",
+ "traits": {
+ "smithy.api#documentation": "Specifies the user with the permissions to mount and access the files, folders, and file\n metadata in your FSx for Windows File Server file system.
\n For information about choosing a user with the right level of access for your transfer, see required permissions for FSx for Windows File Server locations.
"
+ }
+ },
+ "Password": {
+ "target": "com.amazonaws.datasync#SmbPassword",
+ "traits": {
+ "smithy.api#documentation": "Specifies the password of the user with the permissions to mount and access the files,\n folders, and file metadata in your FSx for Windows File Server file system.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationFsxWindowsResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.datasync#UpdateLocationHdfs": {
"type": "operation",
"input": {
@@ -9252,7 +9611,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Updates some parameters of a previously created location for a Hadoop Distributed File\n System cluster.
"
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Hadoop Distributed File\n System (HDFS) transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with an HDFS cluster.
"
}
},
"com.amazonaws.datasync#UpdateLocationHdfsRequest": {
@@ -9366,7 +9725,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Modifies some configurations of the Network File System (NFS) transfer location that\n you're using with DataSync.
\n For more information, see Configuring transfers to or from an\n NFS file server.
"
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Network File System (NFS) transfer location that you're using with DataSync.
\n For more information, see Configuring transfers with an\n NFS file server.
"
}
},
"com.amazonaws.datasync#UpdateLocationNfsRequest": {
@@ -9420,7 +9779,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Updates some parameters of an existing DataSync location for an object\n storage system.
"
+ "smithy.api#documentation": "Modifies the following configuration parameters of the object storage transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with an object storage system.
"
}
},
"com.amazonaws.datasync#UpdateLocationObjectStorageRequest": {
@@ -9487,6 +9846,63 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.datasync#UpdateLocationS3": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.datasync#UpdateLocationS3Request"
+ },
+ "output": {
+ "target": "com.amazonaws.datasync#UpdateLocationS3Response"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.datasync#InternalException"
+ },
+ {
+ "target": "com.amazonaws.datasync#InvalidRequestException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Amazon S3 transfer location that you're using with DataSync.
\n \n Before you begin, make sure that you read the following topics:
\n \n "
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationS3Request": {
+ "type": "structure",
+ "members": {
+ "LocationArn": {
+ "target": "com.amazonaws.datasync#LocationArn",
+ "traits": {
+ "smithy.api#documentation": "Specifies the Amazon Resource Name (ARN) of the Amazon S3 transfer location that you're updating.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Subdirectory": {
+ "target": "com.amazonaws.datasync#S3Subdirectory",
+ "traits": {
+ "smithy.api#documentation": "Specifies a prefix in the S3 bucket that DataSync reads from or writes to\n (depending on whether the bucket is a source or destination location).
\n \n DataSync can't transfer objects with a prefix that begins with a slash\n (/
) or includes //
, /./
, or\n /../
patterns. For example:
\n \n - \n
\n /photos
\n
\n \n - \n
\n photos//2006/January
\n
\n \n - \n
\n photos/./2006/February
\n
\n \n - \n
\n photos/../2006/March
\n
\n \n
\n "
+ }
+ },
+ "S3StorageClass": {
+ "target": "com.amazonaws.datasync#S3StorageClass",
+ "traits": {
+ "smithy.api#documentation": "Specifies the storage class that you want your objects to use when Amazon S3 is a\n transfer destination.
\n For buckets in Amazon Web Services Regions, the storage class defaults to\n STANDARD
. For buckets on Outposts, the storage class defaults to\n OUTPOSTS
.
\n For more information, see Storage class\n considerations with Amazon S3 transfers.
"
+ }
+ },
+ "S3Config": {
+ "target": "com.amazonaws.datasync#S3Config"
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.datasync#UpdateLocationS3Response": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.datasync#UpdateLocationSmb": {
"type": "operation",
"input": {
@@ -9504,7 +9920,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Updates some of the parameters of a Server Message Block\n (SMB) file server location that you can use for DataSync transfers.
"
+ "smithy.api#documentation": "Modifies the following configuration parameters of the Server Message Block\n (SMB) transfer location that you're using with DataSync.
\n For more information, see Configuring DataSync transfers with an SMB file server.
"
}
},
"com.amazonaws.datasync#UpdateLocationSmbRequest": {
@@ -9773,6 +10189,26 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.datasync#UpdatedEfsAccessPointArn": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 128
+ },
+ "smithy.api#pattern": "^(^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):elasticfilesystem:[a-z\\-0-9]+:[0-9]{12}:access-point/fsap-[0-9a-f]{8,40}$)|(^$)$"
+ }
+ },
+ "com.amazonaws.datasync#UpdatedEfsIamRoleArn": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 2048
+ },
+ "smithy.api#pattern": "^(^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):iam::[0-9]{12}:role/.*$)|(^$)$"
+ }
+ },
"com.amazonaws.datasync#VerifyMode": {
"type": "enum",
"members": {
diff --git a/codegen/sdk-codegen/aws-models/dlm.json b/codegen/sdk-codegen/aws-models/dlm.json
index 235ee067f33..e6a6e6e1490 100644
--- a/codegen/sdk-codegen/aws-models/dlm.json
+++ b/codegen/sdk-codegen/aws-models/dlm.json
@@ -306,7 +306,7 @@
"Location": {
"target": "com.amazonaws.dlm#LocationValues",
"traits": {
- "smithy.api#documentation": "\n [Custom snapshot policies only] Specifies the destination for snapshots created by the policy. To create \n\t\t\tsnapshots in the same Region as the source resource, specify CLOUD
. To create \n\t\t\tsnapshots on the same Outpost as the source resource, specify OUTPOST_LOCAL
. \n\t\t\tIf you omit this parameter, CLOUD
is used by default.
\n If the policy targets resources in an Amazon Web Services Region, then you must create \n\t\t\tsnapshots in the same Region as the source resource. If the policy targets resources on an \n\t\t\tOutpost, then you can create snapshots on the same Outpost as the source resource, or in \n\t\t\tthe Region of that Outpost.
"
+ "smithy.api#documentation": "\n [Custom snapshot policies only] Specifies the destination for snapshots created by the policy. The \n\t\t\tallowed destinations depend on the location of the targeted resources.
\n \n - \n
If the policy targets resources in a Region, then you must create snapshots \n\t\t\t\t\tin the same Region as the source resource.
\n \n - \n
If the policy targets resources in a Local Zone, you can create snapshots in \n\t\t\t\t\tthe same Local Zone or in its parent Region.
\n \n - \n
If the policy targets resources on an Outpost, then you can create snapshots \n\t\t\t\t\ton the same Outpost or in its parent Region.
\n \n
\n Specify one of the following values:
\n \n - \n
To create snapshots in the same Region as the source resource, specify \n\t\t\t\t\tCLOUD
.
\n \n - \n
To create snapshots in the same Local Zone as the source resource, specify \n\t\t\t\t\tLOCAL_ZONE
.
\n \n - \n
To create snapshots on the same Outpost as the source resource, specify \n\t\t\t\t\tOUTPOST_LOCAL
.
\n \n
\n Default: CLOUD
\n
"
}
},
"Interval": {
@@ -330,7 +330,7 @@
"CronExpression": {
"target": "com.amazonaws.dlm#CronExpression",
"traits": {
- "smithy.api#documentation": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1\n\t\t\tyear. For more information, see Cron\n\t\t\t\texpressions in the Amazon CloudWatch User Guide.
"
+ "smithy.api#documentation": "The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1\n\t\t\tyear. For more information, see the Cron expressions reference in \n\t\t\tthe Amazon EventBridge User Guide.
"
}
},
"Scripts": {
@@ -1204,12 +1204,12 @@
"DefaultPolicy": {
"target": "com.amazonaws.dlm#DefaultPolicy",
"traits": {
- "smithy.api#documentation": "\n [Default policies only] The type of default policy. Values include:
\n "
+ "smithy.api#documentation": "Indicates whether the policy is a default lifecycle policy or a custom \n\t\t\tlifecycle policy.
\n "
}
}
},
"traits": {
- "smithy.api#documentation": "\n [Custom policies only] Detailed information about a snapshot, AMI, or event-based lifecycle policy.
"
+ "smithy.api#documentation": "Information about a lifecycle policy.
"
}
},
"com.amazonaws.dlm#LifecyclePolicySummary": {
@@ -1356,6 +1356,12 @@
"traits": {
"smithy.api#enumValue": "OUTPOST_LOCAL"
}
+ },
+ "LOCAL_ZONE": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "LOCAL_ZONE"
+ }
}
}
},
@@ -1423,7 +1429,7 @@
"PolicyType": {
"target": "com.amazonaws.dlm#PolicyTypeValues",
"traits": {
- "smithy.api#documentation": "\n [Custom policies only] The valid target resource types and actions a policy can manage. Specify EBS_SNAPSHOT_MANAGEMENT
\n\t\t\tto create a lifecycle policy that manages the lifecycle of Amazon EBS snapshots. Specify IMAGE_MANAGEMENT
\n\t\t\tto create a lifecycle policy that manages the lifecycle of EBS-backed AMIs. Specify EVENT_BASED_POLICY
\n\t\t\tto create an event-based policy that performs specific actions when a defined event occurs in your Amazon Web Services account.
\n The default is EBS_SNAPSHOT_MANAGEMENT
.
"
+ "smithy.api#documentation": "The type of policy. Specify EBS_SNAPSHOT_MANAGEMENT
\n\t\t\tto create a lifecycle policy that manages the lifecycle of Amazon EBS snapshots. Specify IMAGE_MANAGEMENT
\n\t\t\tto create a lifecycle policy that manages the lifecycle of EBS-backed AMIs. Specify EVENT_BASED_POLICY
\n\t\t\tto create an event-based policy that performs specific actions when a defined event occurs in your Amazon Web Services account.
\n The default is EBS_SNAPSHOT_MANAGEMENT
.
"
}
},
"ResourceTypes": {
@@ -1435,7 +1441,7 @@
"ResourceLocations": {
"target": "com.amazonaws.dlm#ResourceLocationList",
"traits": {
- "smithy.api#documentation": "\n [Custom snapshot and AMI policies only] The location of the resources to backup. If the source resources are located in an \n\t\t\tAmazon Web Services Region, specify CLOUD
. If the source resources are located on an Outpost \n\t\t\tin your account, specify OUTPOST
.
\n If you specify OUTPOST
, Amazon Data Lifecycle Manager backs up all resources \n\t\t\t\tof the specified type with matching target tags across all of the Outposts in your account.
"
+ "smithy.api#documentation": "\n [Custom snapshot and AMI policies only] The location of the resources to backup.
\n \n - \n
If the source resources are located in a Region, specify CLOUD
. In this case, \n\t\t\t\t\tthe policy targets all resources of the specified type with matching target tags across all \n\t\t\t\t\tAvailability Zones in the Region.
\n \n - \n
\n [Custom snapshot policies only] If the source resources are located in a Local Zone, specify LOCAL_ZONE
. \n\t\t\t\t\tIn this case, the policy targets all resources of the specified type with matching target \n\t\t\t\t\ttags across all Local Zones in the Region.
\n \n - \n
If the source resources are located on an Outpost in your account, specify OUTPOST
. \n\t\t\t\t\tIn this case, the policy targets all resources of the specified type with matching target \n\t\t\t\t\ttags across all of the Outposts in your account.
\n \n
\n "
}
},
"TargetTags": {
@@ -1603,6 +1609,12 @@
"traits": {
"smithy.api#enumValue": "OUTPOST"
}
+ },
+ "LOCAL_ZONE": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "LOCAL_ZONE"
+ }
}
}
},
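The hunk above adds `LOCAL_ZONE` to the DLM resource-location enum, alongside the rewritten `ResourceLocations` documentation: `CLOUD` for Region-hosted resources, `LOCAL_ZONE` for Local Zone resources (custom snapshot policies only), and `OUTPOST` for Outpost resources. A minimal sketch of that selection rule as a pure helper; the function name and plain-string values are illustrative, not the generated SDK enum types:

```go
package main

import "fmt"

// resourceLocationFor picks the Data Lifecycle Manager ResourceLocations
// value per the documented rules. Plain strings stand in for the
// generated enum constants.
func resourceLocationFor(source string) (string, error) {
	switch source {
	case "region":
		return "CLOUD", nil
	case "local-zone":
		return "LOCAL_ZONE", nil // supported by custom snapshot policies only
	case "outpost":
		return "OUTPOST", nil
	}
	return "", fmt.Errorf("unsupported source location %q", source)
}

func main() {
	v, _ := resourceLocationFor("local-zone")
	fmt.Println(v) // LOCAL_ZONE
}
```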
@@ -1800,7 +1812,7 @@
"CrossRegionCopyRules": {
"target": "com.amazonaws.dlm#CrossRegionCopyRules",
"traits": {
- "smithy.api#documentation": "Specifies a rule for copying snapshots or AMIs across regions.
\n \n You can't specify cross-Region copy rules for policies that create snapshots on an Outpost. \n\t\t\tIf the policy creates snapshots in a Region, then snapshots can be copied to up to three \n\t\t\tRegions or Outposts.
\n "
+ "smithy.api#documentation": "Specifies a rule for copying snapshots or AMIs across Regions.
\n \n You can't specify cross-Region copy rules for policies that create snapshots on an \n\t\t\t\tOutpost or in a Local Zone. If the policy creates snapshots in a Region, then snapshots \n\t\t\t\tcan be copied to up to three Regions or Outposts.
\n "
}
},
"ShareRules": {
diff --git a/codegen/sdk-codegen/aws-models/ec2.json b/codegen/sdk-codegen/aws-models/ec2.json
index 0d54e9f911e..336ecc33085 100644
--- a/codegen/sdk-codegen/aws-models/ec2.json
+++ b/codegen/sdk-codegen/aws-models/ec2.json
@@ -19805,7 +19805,7 @@
"target": "com.amazonaws.ec2#Snapshot"
},
"traits": {
- "smithy.api#documentation": "Creates a snapshot of an EBS volume and stores it in Amazon S3. You can use snapshots for\n \tbackups, to make copies of EBS volumes, and to save data before shutting down an\n \tinstance.
\n You can create snapshots of volumes in a Region and volumes on an Outpost. If you \n \tcreate a snapshot of a volume in a Region, the snapshot must be stored in the same \n \tRegion as the volume. If you create a snapshot of a volume on an Outpost, the snapshot \n \tcan be stored on the same Outpost as the volume, or in the Region for that Outpost.
\n When a snapshot is created, any Amazon Web Services Marketplace product codes that are associated with the\n source volume are propagated to the snapshot.
\n You can take a snapshot of an attached volume that is in use. However, snapshots only\n capture data that has been written to your Amazon EBS volume at the time the snapshot command is\n issued; this might exclude any data that has been cached by any applications or the operating\n system. If you can pause any file systems on the volume long enough to take a snapshot, your\n snapshot should be complete. However, if you cannot pause all file writes to the volume, you\n should unmount the volume from within the instance, issue the snapshot command, and then\n remount the volume to ensure a consistent and complete snapshot. You may remount and use your\n volume while the snapshot status is pending
.
\n When you create a snapshot for an EBS volume that serves as a root device, we recommend \n that you stop the instance before taking the snapshot.
\n Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that\n are created from encrypted snapshots are also automatically encrypted. Your encrypted volumes\n and any associated snapshots always remain protected.
\n You can tag your snapshots during creation. For more information, see Tag your Amazon EC2\n resources in the Amazon EC2 User Guide.
\n For more information, see Amazon EBS and Amazon EBS encryption in the Amazon EBS User Guide.
",
+ "smithy.api#documentation": "Creates a snapshot of an EBS volume and stores it in Amazon S3. You can use snapshots for\n \tbackups, to make copies of EBS volumes, and to save data before shutting down an\n \tinstance.
\n The location of the source EBS volume determines where you can create the snapshot.
\n \n - \n
If the source volume is in a Region, you must create the snapshot in the same \n Region as the volume.
\n \n - \n
If the source volume is in a Local Zone, you can create the snapshot in the same \n Local Zone or in its parent Amazon Web Services Region.
\n \n - \n
If the source volume is on an Outpost, you can create the snapshot on the same \n Outpost or in its parent Amazon Web Services Region.
\n \n
\n When a snapshot is created, any Amazon Web Services Marketplace product codes that are associated with the\n source volume are propagated to the snapshot.
\n You can take a snapshot of an attached volume that is in use. However, snapshots only\n capture data that has been written to your Amazon EBS volume at the time the snapshot command is\n issued; this might exclude any data that has been cached by any applications or the operating\n system. If you can pause any file systems on the volume long enough to take a snapshot, your\n snapshot should be complete. However, if you cannot pause all file writes to the volume, you\n should unmount the volume from within the instance, issue the snapshot command, and then\n remount the volume to ensure a consistent and complete snapshot. You may remount and use your\n volume while the snapshot status is pending
.
\n When you create a snapshot for an EBS volume that serves as a root device, we recommend \n that you stop the instance before taking the snapshot.
\n Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that\n are created from encrypted snapshots are also automatically encrypted. Your encrypted volumes\n and any associated snapshots always remain protected. For more information, see \n Amazon EBS encryption \n in the Amazon EBS User Guide.
",
"smithy.api#examples": [
{
"title": "To create a snapshot",
@@ -19840,7 +19840,7 @@
"OutpostArn": {
"target": "com.amazonaws.ec2#String",
"traits": {
- "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Outpost on which to create a local \n \tsnapshot.
\n \n - \n
To create a snapshot of a volume in a Region, omit this parameter. The snapshot \n \t\t\t\tis created in the same Region as the volume.
\n \n - \n
To create a snapshot of a volume on an Outpost and store the snapshot in the \n \t\t\t\tRegion, omit this parameter. The snapshot is created in the Region for the \n \t\t\t\tOutpost.
\n \n - \n
To create a snapshot of a volume on an Outpost and store the snapshot on an \n \t\t\tOutpost, specify the ARN of the destination Outpost. The snapshot must be created on \n \t\t\tthe same Outpost as the volume.
\n \n
\n For more information, see Create local snapshots from volumes on an Outpost in the Amazon EBS User Guide.
"
+ "smithy.api#documentation": "\n Only supported for volumes on Outposts. If the source volume is not on an Outpost, \n omit this parameter.
\n \n \n - \n
To create the snapshot on the same Outpost as the source volume, specify the \n ARN of that Outpost. The snapshot must be created on the same Outpost as the volume.
\n \n - \n
To create the snapshot in the parent Region of the Outpost, omit this parameter.
\n \n
\n For more information, see Create local snapshots from volumes on an Outpost in the Amazon EBS User Guide.
"
}
},
"VolumeId": {
@@ -19858,6 +19858,12 @@
"smithy.api#xmlName": "TagSpecification"
}
},
+ "Location": {
+ "target": "com.amazonaws.ec2#SnapshotLocationEnum",
+ "traits": {
+ "smithy.api#documentation": "\n Only supported for volumes in Local Zones. If the source volume is not in a Local Zone, \n omit this parameter.
\n \n \n - \n
To create a local snapshot in the same Local Zone as the source volume, specify \n local
.
\n \n - \n
To create a regional snapshot in the parent Region of the Local Zone, specify \n regional
or omit this parameter.
\n \n
\n Default value: regional
\n
"
+ }
+ },
"DryRun": {
"target": "com.amazonaws.ec2#Boolean",
"traits": {
@@ -19880,7 +19886,7 @@
"target": "com.amazonaws.ec2#CreateSnapshotsResult"
},
"traits": {
- "smithy.api#documentation": "Creates crash-consistent snapshots of multiple EBS volumes and stores the data in S3.\n Volumes are chosen by specifying an instance. Any attached volumes will produce one snapshot\n each that is crash-consistent across the instance.
\n You can include all of the volumes currently attached to the instance, or you can exclude \n the root volume or specific data (non-root) volumes from the multi-volume snapshot set.
\n You can create multi-volume snapshots of instances in a Region and instances on an \n \tOutpost. If you create snapshots from an instance in a Region, the snapshots must be stored \n \tin the same Region as the instance. If you create snapshots from an instance on an Outpost, \n \tthe snapshots can be stored on the same Outpost as the instance, or in the Region for that \n \tOutpost.
"
+ "smithy.api#documentation": "Creates crash-consistent snapshots of multiple EBS volumes attached to an Amazon EC2 instance.\n Volumes are chosen by specifying an instance. Each volume attached to the specified instance \n will produce one snapshot that is crash-consistent across the instance. You can include all of \n the volumes currently attached to the instance, or you can exclude the root volume or specific \n data (non-root) volumes from the multi-volume snapshot set.
\n The location of the source instance determines where you can create the snapshots.
\n \n - \n
If the source instance is in a Region, you must create the snapshots in the same \n Region as the instance.
\n \n - \n
If the source instance is in a Local Zone, you can create the snapshots in the same \n Local Zone or in its parent Amazon Web Services Region.
\n \n - \n
If the source instance is on an Outpost, you can create the snapshots on the same \n Outpost or in its parent Amazon Web Services Region.
\n \n
"
}
},
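The rewritten `CreateSnapshot`/`CreateSnapshots` documentation above replaces flat prose with per-location rules: a Region-hosted source must snapshot into the same Region; a Local Zone source may snapshot into the same Local Zone or its parent Region; an Outpost source may snapshot onto the same Outpost or its parent Region, and the new `Location` parameter defaults to `regional`. A sketch of those rules as pure helpers, with illustrative names and strings (not the generated SDK types):

```go
package main

import "fmt"

// allowedSnapshotDestinations mirrors the documented placement rules for
// CreateSnapshot/CreateSnapshots, keyed by where the source volume or
// instance lives.
func allowedSnapshotDestinations(sourceLocation string) ([]string, error) {
	switch sourceLocation {
	case "region":
		return []string{"same-region"}, nil
	case "local-zone":
		return []string{"same-local-zone", "parent-region"}, nil
	case "outpost":
		return []string{"same-outpost", "parent-region"}, nil
	default:
		return nil, fmt.Errorf("unknown source location %q", sourceLocation)
	}
}

// resolveLocation applies the documented default for the new Location
// request parameter: "regional" unless "local" is explicitly specified.
func resolveLocation(loc string) string {
	if loc == "" {
		return "regional"
	}
	return loc
}

func main() {
	dests, _ := allowedSnapshotDestinations("local-zone")
	fmt.Println(dests, resolveLocation("")) // [same-local-zone parent-region] regional
}
```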
"com.amazonaws.ec2#CreateSnapshotsRequest": {
@@ -19903,7 +19909,7 @@
"OutpostArn": {
"target": "com.amazonaws.ec2#String",
"traits": {
- "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Outpost on which to create the local \n \t\tsnapshots.
\n \n - \n
To create snapshots from an instance in a Region, omit this parameter. The \n \t\t\t\tsnapshots are created in the same Region as the instance.
\n \n - \n
To create snapshots from an instance on an Outpost and store the snapshots \n \t\t\t\tin the Region, omit this parameter. The snapshots are created in the Region \n \t\t\t\tfor the Outpost.
\n \n - \n
To create snapshots from an instance on an Outpost and store the snapshots \n \t\t\t\ton an Outpost, specify the ARN of the destination Outpost. The snapshots must \n \t\t\t\tbe created on the same Outpost as the instance.
\n \n
\n For more information, see \n \t\tCreate multi-volume local snapshots from instances on an Outpost in the \n \t\tAmazon EBS User Guide.
"
+ "smithy.api#documentation": "\n Only supported for instances on Outposts. If the source instance is not on an Outpost, \n omit this parameter.
\n \n \n - \n
To create the snapshots on the same Outpost as the source instance, specify the \n ARN of that Outpost. The snapshots must be created on the same Outpost as the instance.
\n \n - \n
To create the snapshots in the parent Region of the Outpost, omit this parameter.
\n \n
\n For more information, see \n Create local snapshots from volumes on an Outpost in the Amazon EBS User Guide.
"
}
},
"TagSpecifications": {
@@ -19924,6 +19930,12 @@
"traits": {
"smithy.api#documentation": "Copies the tags from the specified volume to the corresponding snapshot.
"
}
+ },
+ "Location": {
+ "target": "com.amazonaws.ec2#SnapshotLocationEnum",
+ "traits": {
+ "smithy.api#documentation": "\n Only supported for instances in Local Zones. If the source instance is not in a Local Zone, \n omit this parameter.
\n \n \n - \n
To create local snapshots in the same Local Zone as the source instance, specify \n local
.
\n \n - \n
To create regional snapshots in the parent Region of the Local Zone, specify \n regional
or omit this parameter.
\n \n
\n Default value: regional
\n
"
+ }
}
},
"traits": {
@@ -26471,7 +26483,7 @@
"target": "com.amazonaws.ec2#DeleteSecurityGroupRequest"
},
"output": {
- "target": "smithy.api#Unit"
+ "target": "com.amazonaws.ec2#DeleteSecurityGroupResult"
},
"traits": {
"smithy.api#documentation": "Deletes a security group.
\n If you attempt to delete a security group that is associated with an instance or network interface, is\n\t\t\t referenced by another security group in the same VPC, or has a VPC association, the operation fails with\n\t\t\t\tDependencyViolation
.
",
@@ -26515,6 +26527,30 @@
"smithy.api#input": {}
}
},
+ "com.amazonaws.ec2#DeleteSecurityGroupResult": {
+ "type": "structure",
+ "members": {
+ "Return": {
+ "target": "com.amazonaws.ec2#Boolean",
+ "traits": {
+ "aws.protocols#ec2QueryName": "Return",
+ "smithy.api#documentation": "Returns true
if the request succeeds; otherwise, returns an error.
",
+ "smithy.api#xmlName": "return"
+ }
+ },
+ "GroupId": {
+ "target": "com.amazonaws.ec2#SecurityGroupId",
+ "traits": {
+ "aws.protocols#ec2QueryName": "GroupId",
+ "smithy.api#documentation": "The ID of the deleted security group.
",
+ "smithy.api#xmlName": "groupId"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.ec2#DeleteSnapshot": {
"type": "operation",
"input": {
@@ -52254,7 +52290,7 @@
}
},
"traits": {
- "smithy.api#documentation": "A filter name and value pair that is used to return a more specific list of results from a describe operation. \n Filters can be used to match a set of resources by specific criteria, such as tags, attributes, or IDs.
\n If you specify multiple filters, the filters are joined with an AND
, and the request returns only \n results that match all of the specified filters.
"
+ "smithy.api#documentation": "A filter name and value pair that is used to return a more specific list of results from a describe operation. \n Filters can be used to match a set of resources by specific criteria, such as tags, attributes, or IDs.
\n If you specify multiple filters, the filters are joined with an AND
, and the request returns only \n results that match all of the specified filters.
\n For more information, see List and filter using the CLI and API in the Amazon EC2 User Guide.
"
}
},
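The updated `Filter` documentation above states that multiple filters are joined with `AND`, so a result must match all of them. A minimal stand-in for that matching logic; the `Filter` struct and the OR-within-values behavior shown here are an assumption for illustration, not taken from the snippet:

```go
package main

import "fmt"

// Filter is a simplified stand-in for the EC2 Filter shape: a name plus
// one or more values.
type Filter struct {
	Name   string
	Values []string
}

// matches reports whether a resource's attributes satisfy every filter.
// Filters are joined with AND, as the documentation describes; within a
// single filter, any one of its values may match (an assumption here).
func matches(attrs map[string]string, filters []Filter) bool {
	for _, f := range filters {
		ok := false
		for _, v := range f.Values {
			if attrs[f.Name] == v {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	attrs := map[string]string{"status": "completed", "tag:Team": "infra"}
	filters := []Filter{
		{Name: "status", Values: []string{"completed"}},
		{Name: "tag:Team", Values: []string{"infra", "platform"}},
	}
	fmt.Println(matches(attrs, filters)) // true
}
```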
"com.amazonaws.ec2#FilterList": {
@@ -100721,6 +100757,14 @@
"smithy.api#xmlName": "sseType"
}
},
+ "AvailabilityZone": {
+ "target": "com.amazonaws.ec2#String",
+ "traits": {
+ "aws.protocols#ec2QueryName": "AvailabilityZone",
+ "smithy.api#documentation": "The Availability Zone or Local Zone of the snapshot. For example, us-west-1a
\n (Availability Zone) or us-west-2-lax-1a
(Local Zone).
",
+ "smithy.api#xmlName": "availabilityZone"
+ }
+ },
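The new `AvailabilityZone` field above can carry either an Availability Zone name (`us-west-1a`) or a Local Zone name (`us-west-2-lax-1a`). A naive way to tell the two apart, based only on the two documented examples (Local Zone names carry extra hyphenated segments); this is a heuristic for illustration, not an official naming rule:

```go
package main

import (
	"fmt"
	"strings"
)

// zoneKind guesses whether a zone name refers to an Availability Zone or
// a Local Zone. Heuristic: Local Zone names such as "us-west-2-lax-1a"
// contain more hyphenated segments than AZ names such as "us-west-1a".
func zoneKind(zone string) string {
	if strings.Count(zone, "-") > 2 {
		return "local-zone"
	}
	return "availability-zone"
}

func main() {
	fmt.Println(zoneKind("us-west-1a"))      // availability-zone
	fmt.Println(zoneKind("us-west-2-lax-1a")) // local-zone
}
```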
"TransferType": {
"target": "com.amazonaws.ec2#TransferType",
"traits": {
@@ -101137,6 +101181,14 @@
"smithy.api#documentation": "Reserved for future use.
",
"smithy.api#xmlName": "sseType"
}
+ },
+ "AvailabilityZone": {
+ "target": "com.amazonaws.ec2#String",
+ "traits": {
+ "aws.protocols#ec2QueryName": "AvailabilityZone",
+ "smithy.api#documentation": "The Availability Zone or Local Zone of the snapshots. For example, us-west-1a
\n (Availability Zone) or us-west-2-lax-1a
(Local Zone).
",
+ "smithy.api#xmlName": "availabilityZone"
+ }
}
},
"traits": {
@@ -101152,6 +101204,23 @@
}
}
},
+ "com.amazonaws.ec2#SnapshotLocationEnum": {
+ "type": "enum",
+ "members": {
+ "REGIONAL": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "regional"
+ }
+ },
+ "LOCAL": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "local"
+ }
+ }
+ }
+ },
"com.amazonaws.ec2#SnapshotRecycleBinInfo": {
"type": "structure",
"members": {
@@ -102869,7 +102938,7 @@
"target": "com.amazonaws.ec2#StartDeclarativePoliciesReportResult"
},
"traits": {
- "smithy.api#documentation": "Generates an account status report. The report is generated asynchronously, and can\n take several hours to complete.
\n The report provides the current status of all attributes supported by declarative\n policies for the accounts within the specified scope. The scope is determined by the\n specified TargetId
, which can represent an individual account, or all the\n accounts that fall under the specified organizational unit (OU) or root (the entire\n Amazon Web Services Organization).
\n The report is saved to your specified S3 bucket, using the following path structure\n (with the italicized placeholders representing your specific\n values):
\n \n s3://amzn-s3-demo-bucket/your-optional-s3-prefix/ec2_targetId_reportId_yyyyMMddThhmmZ.csv
\n
\n \n Prerequisites for generating a report\n
\n \n - \n
The StartDeclarativePoliciesReport
API can only be called by the\n management account or delegated administrators for the organization.
\n \n - \n
An S3 bucket must be available before generating the report (you can create a\n new one or use an existing one), and it must have an appropriate bucket policy.\n For a sample S3 policy, see Sample Amazon S3 policy under\n .
\n \n - \n
Trusted access must be enabled for the service for which the declarative\n policy will enforce a baseline configuration. If you use the Amazon Web Services Organizations\n console, this is done automatically when you enable declarative policies. The\n API uses the following service principal to identify the EC2 service:\n ec2.amazonaws.com
. For more information on how to enable\n trusted access with the Amazon Web Services CLI and Amazon Web Services SDKs, see Using\n Organizations with other Amazon Web Services services in the\n Amazon Web Services Organizations User Guide.
\n \n - \n
Only one report per organization can be generated at a time. Attempting to\n generate a report while another is in progress will result in an error.
\n \n
\n For more information, including the required IAM permissions to run this API, see\n Generating the account status report for declarative policies in the\n Amazon Web Services Organizations User Guide.
"
+ "smithy.api#documentation": "Generates an account status report. The report is generated asynchronously, and can\n take several hours to complete.
\n The report provides the current status of all attributes supported by declarative\n policies for the accounts within the specified scope. The scope is determined by the\n specified TargetId
, which can represent an individual account, or all the\n accounts that fall under the specified organizational unit (OU) or root (the entire\n Amazon Web Services Organization).
\n The report is saved to your specified S3 bucket, using the following path structure\n (with the italicized placeholders representing your specific\n values):
\n \n s3://amzn-s3-demo-bucket/your-optional-s3-prefix/ec2_targetId_reportId_yyyyMMddThhmmZ.csv
\n
\n \n Prerequisites for generating a report\n
\n \n - \n
The StartDeclarativePoliciesReport
API can only be called by the\n management account or delegated administrators for the organization.
\n \n - \n
An S3 bucket must be available before generating the report (you can create a\n new one or use an existing one), it must be in the same Region where the report\n generation request is made, and it must have an appropriate bucket policy. For a\n sample S3 policy, see Sample Amazon S3 policy under .
\n \n - \n
Trusted access must be enabled for the service for which the declarative\n policy will enforce a baseline configuration. If you use the Amazon Web Services Organizations\n console, this is done automatically when you enable declarative policies. The\n API uses the following service principal to identify the EC2 service:\n ec2.amazonaws.com
. For more information on how to enable\n trusted access with the Amazon Web Services CLI and Amazon Web Services SDKs, see Using\n Organizations with other Amazon Web Services services in the\n Amazon Web Services Organizations User Guide.
\n \n - \n
Only one report per organization can be generated at a time. Attempting to\n generate a report while another is in progress will result in an error.
\n \n
\n For more information, including the required IAM permissions to run this API, see\n Generating the account status report for declarative policies in the\n Amazon Web Services Organizations User Guide.
"
}
},
"com.amazonaws.ec2#StartDeclarativePoliciesReportRequest": {
@@ -102885,7 +102954,7 @@
"target": "com.amazonaws.ec2#String",
"traits": {
"smithy.api#clientOptional": {},
- "smithy.api#documentation": "The name of the S3 bucket where the report will be saved.
",
+ "smithy.api#documentation": "The name of the S3 bucket where the report will be saved. The bucket must be in the\n same Region where the report generation request is made.
",
"smithy.api#required": {}
}
},
diff --git a/codegen/sdk-codegen/aws-models/ecs.json b/codegen/sdk-codegen/aws-models/ecs.json
index acd01dd3efe..a18d106fa8d 100644
--- a/codegen/sdk-codegen/aws-models/ecs.json
+++ b/codegen/sdk-codegen/aws-models/ecs.json
@@ -4383,7 +4383,7 @@
"maximumPercent": {
"target": "com.amazonaws.ecs#BoxedInteger",
"traits": {
- "smithy.api#documentation": "If a service is using the rolling update (ECS
) deployment type, the\n\t\t\t\tmaximumPercent
parameter represents an upper limit on the number of\n\t\t\tyour service's tasks that are allowed in the RUNNING
or\n\t\t\t\tPENDING
state during a deployment, as a percentage of the\n\t\t\t\tdesiredCount
(rounded down to the nearest integer). This parameter\n\t\t\tenables you to define the deployment batch size. For example, if your service is using\n\t\t\tthe REPLICA
service scheduler and has a desiredCount
of four\n\t\t\ttasks and a maximumPercent
value of 200%, the scheduler may start four new\n\t\t\ttasks before stopping the four older tasks (provided that the cluster resources required\n\t\t\tto do this are available). The default maximumPercent
value for a service\n\t\t\tusing the REPLICA
service scheduler is 200%.
\n The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting\n\t\t\treplacement tasks first and then stopping the unhealthy tasks, as long as cluster\n\t\t\tresources for starting replacement tasks are available. For more information about how\n\t\t\tthe scheduler replaces unhealthy tasks, see Amazon ECS\n\t\t\tservices.
\n If a service is using either the blue/green (CODE_DEPLOY
) or\n\t\t\t\tEXTERNAL
deployment types, and tasks in the service use the\n\t\t\tEC2 launch type, the maximum percent\n\t\t\tvalue is set to the default value. The maximum percent\n\t\t\tvalue is used to define the upper limit on the number of the tasks in the service that\n\t\t\tremain in the RUNNING
state while the container instances are in the\n\t\t\t\tDRAINING
state.
\n \n You can't specify a custom maximumPercent
value for a service that\n\t\t\t\tuses either the blue/green (CODE_DEPLOY
) or EXTERNAL
\n\t\t\t\tdeployment types and has tasks that use the EC2 launch type.
\n \n If the tasks in the service use the Fargate launch type, the maximum\n\t\t\tpercent value is not used, although it is returned when describing your service.
"
+ "smithy.api#documentation": "If a service is using the rolling update (ECS
) deployment type, the\n\t\t\t\tmaximumPercent
parameter represents an upper limit on the number of\n\t\t\tyour service's tasks that are allowed in the RUNNING
or\n\t\t\t\tPENDING
state during a deployment, as a percentage of the\n\t\t\t\tdesiredCount
(rounded down to the nearest integer). This parameter\n\t\t\tenables you to define the deployment batch size. For example, if your service is using\n\t\t\tthe REPLICA
service scheduler and has a desiredCount
of four\n\t\t\ttasks and a maximumPercent
value of 200%, the scheduler may start four new\n\t\t\ttasks before stopping the four older tasks (provided that the cluster resources required\n\t\t\tto do this are available). The default maximumPercent
value for a service\n\t\t\tusing the REPLICA
service scheduler is 200%.
\n The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting\n\t\t\treplacement tasks first and then stopping the unhealthy tasks, as long as cluster\n\t\t\tresources for starting replacement tasks are available. For more information about how\n\t\t\tthe scheduler replaces unhealthy tasks, see Amazon ECS\n\t\t\tservices.
\n If a service is using either the blue/green (CODE_DEPLOY
) or\n\t\t\t\tEXTERNAL
deployment types, and tasks in the service use the\n\t\t\tEC2 launch type, the maximum percent\n\t\t\tvalue is set to the default value. The maximum percent\n\t\t\tvalue is used to define the upper limit on the number of the tasks in the service that\n\t\t\tremain in the RUNNING
state while the container instances are in the\n\t\t\t\tDRAINING
state.
\n \n You can't specify a custom maximumPercent
value for a service that\n\t\t\t\tuses either the blue/green (CODE_DEPLOY
) or EXTERNAL
\n\t\t\t\tdeployment types and has tasks that use the EC2 launch type.
\n \n If the service uses either the blue/green (CODE_DEPLOY
) or EXTERNAL
\n\t\t\tdeployment types, and the tasks in the service use the Fargate launch type, the maximum\n\t\t\tpercent value is not used. The value is still returned when describing your service.
"
}
},
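The `maximumPercent` documentation above defines the deployment batch limit as a percentage of `desiredCount`, rounded down to the nearest integer, and walks through the example of four desired tasks at 200%. That arithmetic as a one-line helper (names are illustrative, not the scheduler's internals):

```go
package main

import "fmt"

// maxRunningTasks computes the upper bound on RUNNING/PENDING tasks during
// a rolling (ECS) deployment: desiredCount scaled by maximumPercent,
// rounded down to the nearest integer, per the maximumPercent docs.
func maxRunningTasks(desiredCount, maximumPercent int) int {
	return desiredCount * maximumPercent / 100 // integer division floors
}

func main() {
	// The documented example: desiredCount of four and maximumPercent of
	// 200% lets the scheduler run up to eight tasks mid-deployment.
	fmt.Println(maxRunningTasks(4, 200)) // 8
}
```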
"minimumHealthyPercent": {
@@ -7422,7 +7422,7 @@
"cluster": {
"target": "com.amazonaws.ecs#String",
"traits": {
- "smithy.api#documentation": "The cluster that hosts the service. This can either be the cluster name or ARN.\n\t\t\tStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon\n\t\t\tElastic Inference (EI), and will help current customers migrate their workloads to\n\t\t\toptions that offer better price and performanceIf you don't specify a cluster,\n\t\t\t\tdefault
is used.
"
+ "smithy.api#documentation": "The cluster that hosts the service. This can either be the cluster name or ARN.\n\t\t\tStarting April 15, 2023, Amazon Web Services will not onboard new customers to Amazon\n\t\t\tElastic Inference (EI), and will help current customers migrate their workloads to\n\t\t\toptions that offer better price and performance. If you don't specify a cluster,\n\t\t\t\tdefault
is used.
"
}
},
"status": {
@@ -8148,7 +8148,7 @@
"options": {
"target": "com.amazonaws.ecs#LogConfigurationOptionsMap",
"traits": {
- "smithy.api#documentation": "The configuration options to send to the log driver.
\n The options you can specify depend on the log driver. Some of the options you can\n\t\t\tspecify when you use the awslogs
log driver to route logs to Amazon CloudWatch\n\t\t\tinclude the following:
\n \n - awslogs-create-group
\n - \n
Required: No
\n Specify whether you want the log group to be created automatically. If\n\t\t\t\t\t\tthis option isn't specified, it defaults to false
.
\n \n Your IAM policy must include the logs:CreateLogGroup
\n\t\t\t\t\t\t\tpermission before you attempt to use\n\t\t\t\t\t\t\tawslogs-create-group
.
\n \n \n - awslogs-region
\n - \n
Required: Yes
\n Specify the Amazon Web Services Region that the awslogs
log driver is to\n\t\t\t\t\t\tsend your Docker logs to. You can choose to send all of your logs from\n\t\t\t\t\t\tclusters in different Regions to a single region in CloudWatch Logs. This is so that\n\t\t\t\t\t\tthey're all visible in one location. Otherwise, you can separate them by\n\t\t\t\t\t\tRegion for more granularity. Make sure that the specified log group exists\n\t\t\t\t\t\tin the Region that you specify with this option.
\n \n - awslogs-group
\n - \n
Required: Yes
\n Make sure to specify a log group that the awslogs
log driver\n\t\t\t\t\t\tsends its log streams to.
\n \n - awslogs-stream-prefix
\n - \n
Required: Yes, when using the Fargate launch\n\t\t\t\t\t\t\ttype.Optional for the EC2 launch type,\n\t\t\t\t\t\t\trequired for the Fargate launch type.
\n Use the awslogs-stream-prefix
option to associate a log\n\t\t\t\t\t\tstream with the specified prefix, the container name, and the ID of the\n\t\t\t\t\t\tAmazon ECS task that the container belongs to. If you specify a prefix with this\n\t\t\t\t\t\toption, then the log stream takes the format\n\t\t\t\t\t\t\tprefix-name/container-name/ecs-task-id
.
\n If you don't specify a prefix with this option, then the log stream is\n\t\t\t\t\t\tnamed after the container ID that's assigned by the Docker daemon on the\n\t\t\t\t\t\tcontainer instance. Because it's difficult to trace logs back to the\n\t\t\t\t\t\tcontainer that sent them with just the Docker container ID (which is only\n\t\t\t\t\t\tavailable on the container instance), we recommend that you specify a prefix\n\t\t\t\t\t\twith this option.
\n For Amazon ECS services, you can use the service name as the prefix. Doing so,\n\t\t\t\t\t\tyou can trace log streams to the service that the container belongs to, the\n\t\t\t\t\t\tname of the container that sent them, and the ID of the task that the\n\t\t\t\t\t\tcontainer belongs to.
\n You must specify a stream-prefix for your logs to have your logs appear in\n\t\t\t\t\t\tthe Log pane when using the Amazon ECS console.
\n \n - awslogs-datetime-format
\n - \n
Required: No
\n This option defines a multiline start pattern in Python\n\t\t\t\t\t\t\tstrftime
format. A log message consists of a line that\n\t\t\t\t\t\tmatches the pattern and any following lines that don’t match the pattern.\n\t\t\t\t\t\tThe matched line is the delimiter between log messages.
\n One example of a use case for using this format is for parsing output such\n\t\t\t\t\t\tas a stack dump, which might otherwise be logged in multiple entries. The\n\t\t\t\t\t\tcorrect pattern allows it to be captured in a single entry.
\n For more information, see awslogs-datetime-format.
\n You cannot configure both the awslogs-datetime-format
and\n\t\t\t\t\t\t\tawslogs-multiline-pattern
options.
\n \n Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n \n - awslogs-multiline-pattern
\n - \n
Required: No
\n This option defines a multiline start pattern that uses a regular\n\t\t\t\t\t\texpression. A log message consists of a line that matches the pattern and\n\t\t\t\t\t\tany following lines that don’t match the pattern. The matched line is the\n\t\t\t\t\t\tdelimiter between log messages.
\n For more information, see awslogs-multiline-pattern.
\n This option is ignored if awslogs-datetime-format
is also\n\t\t\t\t\t\tconfigured.
\n You cannot configure both the awslogs-datetime-format
and\n\t\t\t\t\t\t\tawslogs-multiline-pattern
options.
\n \n Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n \n - mode
\n - \n
Required: No
\n Valid values: non-blocking
| blocking
\n
\n This option defines the delivery mode of log messages from the container\n\t\t\t\t\t\tto CloudWatch Logs. The delivery mode you choose affects application availability when\n\t\t\t\t\t\tthe flow of logs from container to CloudWatch is interrupted.
\n If you use the blocking
mode and the flow of logs to CloudWatch is\n\t\t\t\t\t\tinterrupted, calls from container code to write to the stdout
\n\t\t\t\t\t\tand stderr
streams will block. The logging thread of the\n\t\t\t\t\t\tapplication will block as a result. This may cause the application to become\n\t\t\t\t\t\tunresponsive and lead to container healthcheck failure.
\n If you use the non-blocking
mode, the container's logs are\n\t\t\t\t\t\tinstead stored in an in-memory intermediate buffer configured with the\n\t\t\t\t\t\t\tmax-buffer-size
option. This prevents the application from\n\t\t\t\t\t\tbecoming unresponsive when logs cannot be sent to CloudWatch. We recommend using\n\t\t\t\t\t\tthis mode if you want to ensure service availability and are okay with some\n\t\t\t\t\t\tlog loss. For more information, see Preventing log loss with non-blocking mode in the awslogs
\n\t\t\t\t\t\t\tcontainer log driver.
\n \n - max-buffer-size
\n - \n
Required: No
\n Default value: 1m
\n
\n When non-blocking
mode is used, the\n\t\t\t\t\t\t\tmax-buffer-size
log option controls the size of the buffer\n\t\t\t\t\t\tthat's used for intermediate message storage. Make sure to specify an\n\t\t\t\t\t\tadequate buffer size based on your application. When the buffer fills up,\n\t\t\t\t\t\tfurther logs cannot be stored. Logs that cannot be stored are lost.
\n \n
\n To route logs using the splunk
log router, you need to specify a\n\t\t\t\tsplunk-token
and a splunk-url
.
\n When you use the awsfirelens
log router to route logs to an Amazon Web Services Service\n\t\t\tor Amazon Web Services Partner Network destination for log storage and analytics, you can set the\n\t\t\t\tlog-driver-buffer-limit
option to limit the number of events that are\n\t\t\tbuffered in memory, before being sent to the log router container. It can help to\n\t\t\tresolve potential log loss issue because high throughput might result in memory running\n\t\t\tout for the buffer inside of Docker.
\n Other options you can specify when using awsfirelens
to route logs depend\n\t\t\ton the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region\n\t\t\twith region
and a name for the log stream with\n\t\t\tdelivery_stream
.
\n When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with\n\t\t\t\tregion
and a data stream name with stream
.
\n When you export logs to Amazon OpenSearch Service, you can specify options like Name
,\n\t\t\t\tHost
(OpenSearch Service endpoint without protocol), Port
,\n\t\t\t\tIndex
, Type
, Aws_auth
,\n\t\t\t\tAws_region
, Suppress_Type_Name
, and\n\t\t\ttls
.
\n When you export logs to Amazon S3, you can specify the bucket using the bucket
\n\t\t\toption. You can also specify region
, total_file_size
,\n\t\t\t\tupload_timeout
, and use_put_object
as options.
\n This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
"
+ "smithy.api#documentation": "The configuration options to send to the log driver.
\n The options you can specify depend on the log driver. Some of the options you can\n\t\t\tspecify when you use the awslogs
log driver to route logs to Amazon CloudWatch\n\t\t\tinclude the following:
\n \n - awslogs-create-group
\n - \n
Required: No
\n Specify whether you want the log group to be created automatically. If\n\t\t\t\t\t\tthis option isn't specified, it defaults to false
.
\n \n Your IAM policy must include the logs:CreateLogGroup
\n\t\t\t\t\t\t\tpermission before you attempt to use\n\t\t\t\t\t\t\tawslogs-create-group
.
\n \n \n - awslogs-region
\n - \n
Required: Yes
\n Specify the Amazon Web Services Region that the awslogs
log driver is to\n\t\t\t\t\t\tsend your Docker logs to. You can choose to send all of your logs from\n\t\t\t\t\t\tclusters in different Regions to a single Region in CloudWatch Logs so that\n\t\t\t\t\t\tthey're all visible in one location. Otherwise, you can separate them by\n\t\t\t\t\t\tRegion for more granularity. Make sure that the specified log group exists\n\t\t\t\t\t\tin the Region that you specify with this option.
\n \n - awslogs-group
\n - \n
Required: Yes
\n Make sure to specify a log group that the awslogs
log driver\n\t\t\t\t\t\tsends its log streams to.
\n \n - awslogs-stream-prefix
\n - \n
Required: Yes, when using the Fargate launch\n\t\t\t\t\t\t\ttype.Optional for the EC2 launch type,\n\t\t\t\t\t\t\trequired for the Fargate launch type.
\n Use the awslogs-stream-prefix
option to associate a log\n\t\t\t\t\t\tstream with the specified prefix, the container name, and the ID of the\n\t\t\t\t\t\tAmazon ECS task that the container belongs to. If you specify a prefix with this\n\t\t\t\t\t\toption, then the log stream takes the format\n\t\t\t\t\t\t\tprefix-name/container-name/ecs-task-id
.
\n If you don't specify a prefix with this option, then the log stream is\n\t\t\t\t\t\tnamed after the container ID that's assigned by the Docker daemon on the\n\t\t\t\t\t\tcontainer instance. Because it's difficult to trace logs back to the\n\t\t\t\t\t\tcontainer that sent them with just the Docker container ID (which is only\n\t\t\t\t\t\tavailable on the container instance), we recommend that you specify a prefix\n\t\t\t\t\t\twith this option.
\n For Amazon ECS services, you can use the service name as the prefix. Doing so,\n\t\t\t\t\t\tyou can trace log streams to the service that the container belongs to, the\n\t\t\t\t\t\tname of the container that sent them, and the ID of the task that the\n\t\t\t\t\t\tcontainer belongs to.
\n You must specify a stream-prefix for your logs to appear in\n\t\t\t\t\t\tthe Log pane when using the Amazon ECS console.
\n \n - awslogs-datetime-format
\n - \n
Required: No
\n This option defines a multiline start pattern in Python\n\t\t\t\t\t\t\tstrftime
format. A log message consists of a line that\n\t\t\t\t\t\tmatches the pattern and any following lines that don’t match the pattern.\n\t\t\t\t\t\tThe matched line is the delimiter between log messages.
\n One example of a use case for using this format is for parsing output such\n\t\t\t\t\t\tas a stack dump, which might otherwise be logged in multiple entries. The\n\t\t\t\t\t\tcorrect pattern allows it to be captured in a single entry.
\n For more information, see awslogs-datetime-format.
\n You cannot configure both the awslogs-datetime-format
and\n\t\t\t\t\t\t\tawslogs-multiline-pattern
options.
\n \n Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n \n - awslogs-multiline-pattern
\n - \n
Required: No
\n This option defines a multiline start pattern that uses a regular\n\t\t\t\t\t\texpression. A log message consists of a line that matches the pattern and\n\t\t\t\t\t\tany following lines that don’t match the pattern. The matched line is the\n\t\t\t\t\t\tdelimiter between log messages.
\n For more information, see awslogs-multiline-pattern.
\n This option is ignored if awslogs-datetime-format
is also\n\t\t\t\t\t\tconfigured.
\n You cannot configure both the awslogs-datetime-format
and\n\t\t\t\t\t\t\tawslogs-multiline-pattern
options.
\n \n Multiline logging performs regular expression parsing and matching of\n\t\t\t\t\t\t\tall log messages. This might have a negative impact on logging\n\t\t\t\t\t\t\tperformance.
\n \n \n - mode
\n - \n
Required: No
\n Valid values: non-blocking
| blocking
\n
\n This option defines the delivery mode of log messages from the container\n\t\t\t\t\t\tto CloudWatch Logs. The delivery mode you choose affects application availability when\n\t\t\t\t\t\tthe flow of logs from container to CloudWatch is interrupted.
\n If you use the blocking
mode and the flow of logs to CloudWatch is\n\t\t\t\t\t\tinterrupted, calls from container code to write to the stdout
\n\t\t\t\t\t\tand stderr
streams will block. The logging thread of the\n\t\t\t\t\t\tapplication will block as a result. This may cause the application to become\n\t\t\t\t\t\tunresponsive and lead to container health check failure.
\n If you use the non-blocking
mode, the container's logs are\n\t\t\t\t\t\tinstead stored in an in-memory intermediate buffer configured with the\n\t\t\t\t\t\t\tmax-buffer-size
option. This prevents the application from\n\t\t\t\t\t\tbecoming unresponsive when logs cannot be sent to CloudWatch. We recommend using\n\t\t\t\t\t\tthis mode if you want to ensure service availability and are okay with some\n\t\t\t\t\t\tlog loss. For more information, see Preventing log loss with non-blocking mode in the awslogs
\n\t\t\t\t\t\t\tcontainer log driver.
\n \n - max-buffer-size
\n - \n
Required: No
\n Default value: 1m
\n
\n When non-blocking
mode is used, the\n\t\t\t\t\t\t\tmax-buffer-size
log option controls the size of the buffer\n\t\t\t\t\t\tthat's used for intermediate message storage. Make sure to specify an\n\t\t\t\t\t\tadequate buffer size based on your application. When the buffer fills up,\n\t\t\t\t\t\tfurther logs cannot be stored. Logs that cannot be stored are lost.
\n \n
\n To route logs using the splunk
log router, you need to specify a\n\t\t\t\tsplunk-token
and a splunk-url
.
\n When you use the awsfirelens
log router to route logs to an Amazon Web Services Service\n\t\t\tor Amazon Web Services Partner Network destination for log storage and analytics, you can set the\n\t\t\t\tlog-driver-buffer-limit
option to limit the number of events that are\n\t\t\tbuffered in memory, before being sent to the log router container. This can help\n\t\t\tresolve potential log loss issues, because high throughput might exhaust the memory\n\t\t\tavailable to the buffer inside of Docker.
\n Other options you can specify when using awsfirelens
to route logs depend\n\t\t\ton the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region\n\t\t\twith region
and a name for the log stream with\n\t\t\tdelivery_stream
.
\n When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with\n\t\t\t\tregion
and a data stream name with stream
.
\n When you export logs to Amazon OpenSearch Service, you can specify options like Name
,\n\t\t\t\tHost
(OpenSearch Service endpoint without protocol), Port
,\n\t\t\t\tIndex
, Type
, Aws_auth
,\n\t\t\t\tAws_region
, Suppress_Type_Name
, and\n\t\t\ttls
. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
\n When you export logs to Amazon S3, you can specify the bucket using the bucket
\n\t\t\toption. You can also specify region
, total_file_size
,\n\t\t\t\tupload_timeout
, and use_put_object
as options.
\n This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
\n
"
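As a minimal, hypothetical sketch (not SDK code; `groupMultiline` and the sample pattern are illustrative), the multiline grouping behavior described for `awslogs-multiline-pattern` above can be modeled as: a line matching the start pattern begins a new message, and following non-matching lines are appended to the current one.

```go
package main

import (
	"fmt"
	"regexp"
)

// groupMultiline sketches how a multiline start pattern delimits log messages:
// a line matching the pattern begins a new message; lines that don't match are
// appended to the message in progress.
func groupMultiline(lines []string, startPattern string) []string {
	re := regexp.MustCompile(startPattern)
	var messages []string
	for _, line := range lines {
		if len(messages) == 0 || re.MatchString(line) {
			messages = append(messages, line)
		} else {
			messages[len(messages)-1] += "\n" + line
		}
	}
	return messages
}

func main() {
	lines := []string{
		"ERROR something failed",
		"  at frame 1",
		"  at frame 2",
		"INFO recovered",
	}
	// The stack trace collapses into the first message; two messages total.
	for _, m := range groupMultiline(lines, `^(ERROR|WARN|INFO)`) {
		fmt.Printf("%q\n", m)
	}
}
```

Note this is also why multiline logging can hurt performance: every log line is run through the regular expression.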
}
},
"secretOptions": {
@@ -9635,6 +9635,12 @@
"traits": {
"smithy.api#documentation": "The operating system that your tasks definitions run on. A platform family is\n\t\t\tspecified only for tasks using the Fargate launch type.
"
}
+ },
+ "enableFaultInjection": {
+ "target": "com.amazonaws.ecs#BoxedBoolean",
+ "traits": {
+ "smithy.api#documentation": "Enables fault injection when you register your task definition and allows for fault injection requests \n\t\t\tto be accepted from the task's containers. The default value is false
.
"
+ }
}
},
"traits": {
@@ -12519,6 +12525,12 @@
"traits": {
"smithy.api#documentation": "The ephemeral storage settings to use for tasks run with the task definition.
"
}
+ },
+ "enableFaultInjection": {
+ "target": "com.amazonaws.ecs#BoxedBoolean",
+ "traits": {
+ "smithy.api#documentation": "Enables fault injection and allows for fault injection requests to be accepted from the task's containers. \n\t\t\tThe default value is false
.
"
+ }
}
},
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/eks.json b/codegen/sdk-codegen/aws-models/eks.json
index 06f60c4a44e..d4cc65bd5e9 100644
--- a/codegen/sdk-codegen/aws-models/eks.json
+++ b/codegen/sdk-codegen/aws-models/eks.json
@@ -3843,6 +3843,12 @@
"smithy.api#documentation": "The node group update configuration.
"
}
},
+ "nodeRepairConfig": {
+ "target": "com.amazonaws.eks#NodeRepairConfig",
+ "traits": {
+ "smithy.api#documentation": "The node auto repair configuration for the node group.
"
+ }
+ },
"capacityType": {
"target": "com.amazonaws.eks#CapacityTypes",
"traits": {
@@ -8400,6 +8406,20 @@
"smithy.api#documentation": "Information about an Amazon EKS add-on from the Amazon Web Services Marketplace.
"
}
},
+ "com.amazonaws.eks#NodeRepairConfig": {
+ "type": "structure",
+ "members": {
+ "enabled": {
+ "target": "com.amazonaws.eks#BoxedBoolean",
+ "traits": {
+ "smithy.api#documentation": "Specifies whether to enable node auto repair for the node group. Node auto repair is \n disabled by default.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The node auto repair configuration for the node group.
"
+ }
+ },
"com.amazonaws.eks#Nodegroup": {
"type": "structure",
"members": {
@@ -8529,6 +8549,12 @@
"smithy.api#documentation": "The node group update configuration.
"
}
},
+ "nodeRepairConfig": {
+ "target": "com.amazonaws.eks#NodeRepairConfig",
+ "traits": {
+ "smithy.api#documentation": "The node auto repair configuration for the node group.
"
+ }
+ },
"launchTemplate": {
"target": "com.amazonaws.eks#LaunchTemplateSpecification",
"traits": {
@@ -9383,13 +9409,13 @@
"remoteNodeNetworks": {
"target": "com.amazonaws.eks#RemoteNodeNetworkList",
"traits": {
- "smithy.api#documentation": "The list of network CIDRs that can contain hybrid nodes.
"
+ "smithy.api#documentation": "The list of network CIDRs that can contain hybrid nodes.
\n These CIDR blocks define the expected IP address range of the hybrid nodes that join\n the cluster. These blocks are typically determined by your network administrator.
\n Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for example, \n 10.2.0.0/16
).
\n It must satisfy the following requirements:
\n \n - \n
Each block must be within an IPv4
RFC-1918 network range. Minimum\n allowed size is /24, maximum allowed size is /8. Publicly-routable addresses\n aren't supported.
\n \n - \n
Each block cannot overlap with the range of the VPC CIDR blocks for your EKS\n resources, or the block of the Kubernetes service IP range.
\n \n - \n
Each block must have a route to the VPC that uses the VPC CIDR blocks, not\n public IPs or Elastic IPs. There are many options including Transit Gateway,\n Site-to-Site VPN, or Direct Connect.
\n \n - \n
Each host must allow outbound connection to the EKS cluster control plane on\n TCP ports 443
and 10250
.
\n \n - \n
Each host must allow inbound connection from the EKS cluster control plane on\n TCP port 10250 for logs, exec and port-forward operations.
\n \n - \n
Each host must allow TCP and UDP network connectivity to and from other hosts\n that are running CoreDNS
on UDP port 53
for service and pod DNS\n names.
\n \n
"
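The sizing and RFC-1918 requirements above can be sketched with a hypothetical validator (not SDK code; overlap and routing checks need knowledge of the VPC, so only the first two requirements are checked here).

```go
package main

import (
	"fmt"
	"net"
)

// mustCIDR parses a CIDR or panics; used to build the RFC-1918 table below.
func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

// rfc1918 holds the private IPv4 ranges a remote network CIDR must fall within.
var rfc1918 = []*net.IPNet{
	mustCIDR("10.0.0.0/8"),
	mustCIDR("172.16.0.0/12"),
	mustCIDR("192.168.0.0/16"),
}

// validRemoteNetworkCIDR checks that a block is IPv4, sized between /8 and /24
// (minimum size /24, maximum size /8), and inside an RFC-1918 range.
func validRemoteNetworkCIDR(cidr string) bool {
	ip, ipnet, err := net.ParseCIDR(cidr)
	if err != nil || ip.To4() == nil {
		return false
	}
	ones, bits := ipnet.Mask.Size()
	if bits != 32 || ones < 8 || ones > 24 {
		return false
	}
	for _, r := range rfc1918 {
		if r.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(validRemoteNetworkCIDR("10.2.0.0/16")) // true: private, within /8../24
	fmt.Println(validRemoteNetworkCIDR("8.8.0.0/16"))  // false: publicly routable
	fmt.Println(validRemoteNetworkCIDR("10.0.0.0/26")) // false: smaller than /24
}
```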
}
},
"remotePodNetworks": {
"target": "com.amazonaws.eks#RemotePodNetworkList",
"traits": {
- "smithy.api#documentation": "The list of network CIDRs that can contain pods that run Kubernetes webhooks on hybrid nodes.
"
+ "smithy.api#documentation": "The list of network CIDRs that can contain pods that run Kubernetes webhooks on hybrid nodes.
\n These CIDR blocks are determined by configuring your Container Network Interface (CNI)\n plugin. We recommend the Calico CNI or Cilium CNI. Note that the Amazon VPC CNI plugin for Kubernetes isn't\n available for on-premises and edge locations.
\n Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for example, \n 10.2.0.0/16
).
\n It must satisfy the following requirements:
\n \n - \n
Each block must be within an IPv4
RFC-1918 network range. Minimum\n allowed size is /24, maximum allowed size is /8. Publicly-routable addresses\n aren't supported.
\n \n - \n
Each block cannot overlap with the range of the VPC CIDR blocks for your EKS\n resources, or the block of the Kubernetes service IP range.
\n \n
"
}
}
},
@@ -9423,12 +9449,12 @@
"cidrs": {
"target": "com.amazonaws.eks#StringList",
"traits": {
- "smithy.api#documentation": "A network CIDR that can contain hybrid nodes.
"
+ "smithy.api#documentation": "A network CIDR that can contain hybrid nodes.
\n These CIDR blocks define the expected IP address range of the hybrid nodes that join\n the cluster. These blocks are typically determined by your network administrator.
\n Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for example, \n 10.2.0.0/16
).
\n It must satisfy the following requirements:
\n \n - \n
Each block must be within an IPv4
RFC-1918 network range. Minimum\n allowed size is /24, maximum allowed size is /8. Publicly-routable addresses\n aren't supported.
\n \n - \n
Each block cannot overlap with the range of the VPC CIDR blocks for your EKS\n resources, or the block of the Kubernetes service IP range.
\n \n - \n
Each block must have a route to the VPC that uses the VPC CIDR blocks, not\n public IPs or Elastic IPs. There are many options including Transit Gateway,\n Site-to-Site VPN, or Direct Connect.
\n \n - \n
Each host must allow outbound connection to the EKS cluster control plane on\n TCP ports 443
and 10250
.
\n \n - \n
Each host must allow inbound connection from the EKS cluster control plane on\n TCP port 10250 for logs, exec and port-forward operations.
\n \n - \n
Each host must allow TCP and UDP network connectivity to and from other hosts\n that are running CoreDNS
on UDP port 53
for service and pod DNS\n names.
\n \n
"
}
}
},
"traits": {
- "smithy.api#documentation": "A network CIDR that can contain hybrid nodes.
"
+ "smithy.api#documentation": "A network CIDR that can contain hybrid nodes.
\n These CIDR blocks define the expected IP address range of the hybrid nodes that join\n the cluster. These blocks are typically determined by your network administrator.
\n Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for example, \n 10.2.0.0/16
).
\n It must satisfy the following requirements:
\n \n - \n
Each block must be within an IPv4
RFC-1918 network range. Minimum\n allowed size is /24, maximum allowed size is /8. Publicly-routable addresses\n aren't supported.
\n \n - \n
Each block cannot overlap with the range of the VPC CIDR blocks for your EKS\n resources, or the block of the Kubernetes service IP range.
\n \n - \n
Each block must have a route to the VPC that uses the VPC CIDR blocks, not\n public IPs or Elastic IPs. There are many options including Transit Gateway,\n Site-to-Site VPN, or Direct Connect.
\n \n - \n
Each host must allow outbound connection to the EKS cluster control plane on\n TCP ports 443
and 10250
.
\n \n - \n
Each host must allow inbound connection from the EKS cluster control plane on\n TCP port 10250 for logs, exec and port-forward operations.
\n \n - \n
Each host must allow TCP and UDP network connectivity to and from other hosts\n that are running CoreDNS
on UDP port 53
for service and pod DNS\n names.
\n \n
"
}
},
"com.amazonaws.eks#RemoteNodeNetworkList": {
@@ -9449,12 +9475,12 @@
"cidrs": {
"target": "com.amazonaws.eks#StringList",
"traits": {
- "smithy.api#documentation": "A network CIDR that can contain pods that run Kubernetes webhooks on hybrid nodes.
"
+ "smithy.api#documentation": "A network CIDR that can contain pods that run Kubernetes webhooks on hybrid nodes.
\n These CIDR blocks are determined by configuring your Container Network Interface (CNI)\n plugin. We recommend the Calico CNI or Cilium CNI. Note that the Amazon VPC CNI plugin for Kubernetes isn't\n available for on-premises and edge locations.
\n Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for example, \n 10.2.0.0/16
).
\n It must satisfy the following requirements:
\n \n - \n
Each block must be within an IPv4
RFC-1918 network range. Minimum\n allowed size is /24, maximum allowed size is /8. Publicly-routable addresses\n aren't supported.
\n \n - \n
Each block cannot overlap with the range of the VPC CIDR blocks for your EKS\n resources, or the block of the Kubernetes service IP range.
\n \n
"
}
}
},
"traits": {
- "smithy.api#documentation": "A network CIDR that can contain pods that run Kubernetes webhooks on hybrid nodes.
"
+ "smithy.api#documentation": "A network CIDR that can contain pods that run Kubernetes webhooks on hybrid nodes.
\n These CIDR blocks are determined by configuring your Container Network Interface (CNI)\n plugin. We recommend the Calico CNI or Cilium CNI. Note that the Amazon VPC CNI plugin for Kubernetes isn't\n available for on-premises and edge locations.
\n Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for example, \n 10.2.0.0/16
).
\n It must satisfy the following requirements:
\n \n - \n
Each block must be within an IPv4
RFC-1918 network range. Minimum\n allowed size is /24, maximum allowed size is /8. Publicly-routable addresses\n aren't supported.
\n \n - \n
Each block cannot overlap with the range of the VPC CIDR blocks for your EKS\n resources, or the block of the Kubernetes service IP range.
\n \n
"
}
},
"com.amazonaws.eks#RemotePodNetworkList": {
@@ -10614,6 +10640,12 @@
"smithy.api#documentation": "The node group update configuration.
"
}
},
+ "nodeRepairConfig": {
+ "target": "com.amazonaws.eks#NodeRepairConfig",
+ "traits": {
+ "smithy.api#documentation": "The node auto repair configuration for the node group.
"
+ }
+ },
"clientRequestToken": {
"target": "com.amazonaws.eks#String",
"traits": {
@@ -10902,6 +10934,12 @@
"smithy.api#enumValue": "MaxUnavailablePercentage"
}
},
+ "NODE_REPAIR_ENABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "NodeRepairEnabled"
+ }
+ },
"CONFIGURATION_VALUES": {
"target": "smithy.api#Unit",
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/glue.json b/codegen/sdk-codegen/aws-models/glue.json
index b8a441b21ee..37d25bc7bda 100644
--- a/codegen/sdk-codegen/aws-models/glue.json
+++ b/codegen/sdk-codegen/aws-models/glue.json
@@ -10749,7 +10749,7 @@
"WorkerType": {
"target": "com.amazonaws.glue#WorkerType",
"traits": {
- "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
+ "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
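The per-worker figures in the list above can be captured as a lookup table. This is an illustrative sketch only (`workerSpec`, `workerSpecs`, and `totalDPU` are hypothetical names, and the numbers are copied from the documentation text, not an authoritative source).

```go
package main

import "fmt"

// workerSpec mirrors the per-worker resources listed in the documentation.
type workerSpec struct {
	DPU    float64
	VCPUs  int
	MemGB  int
	DiskGB int
}

var workerSpecs = map[string]workerSpec{
	"G.1X":   {DPU: 1, VCPUs: 4, MemGB: 16, DiskGB: 94},
	"G.2X":   {DPU: 2, VCPUs: 8, MemGB: 32, DiskGB: 138},
	"G.4X":   {DPU: 4, VCPUs: 16, MemGB: 64, DiskGB: 256},
	"G.8X":   {DPU: 8, VCPUs: 32, MemGB: 128, DiskGB: 512},
	"G.025X": {DPU: 0.25, VCPUs: 2, MemGB: 4, DiskGB: 84},
	"Z.2X":   {DPU: 2, VCPUs: 8, MemGB: 64, DiskGB: 128}, // 2 M-DPU, Ray jobs
}

// totalDPU returns the DPU footprint of n workers of the given type,
// or false if the worker type is unknown.
func totalDPU(workerType string, n int) (float64, bool) {
	spec, ok := workerSpecs[workerType]
	if !ok {
		return 0, false
	}
	return spec.DPU * float64(n), true
}

func main() {
	if dpu, ok := totalDPU("G.2X", 10); ok {
		fmt.Println(dpu) // 10 G.2X workers consume 20 DPU
	}
}
```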
}
},
"CodeGenConfigurationNodes": {
@@ -11627,7 +11627,7 @@
"WorkerType": {
"target": "com.amazonaws.glue#WorkerType",
"traits": {
- "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, or G.8X for Spark jobs. Accepts the value Z.2X for Ray notebooks.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
+ "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, or G.8X for Spark jobs. Accepts the value Z.2X for Ray notebooks.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
}
},
"SecurityConfiguration": {
@@ -11894,7 +11894,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Creates a new trigger.
"
+ "smithy.api#documentation": "Creates a new trigger.
\n Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
"
}
},
"com.amazonaws.glue#CreateTriggerRequest": {
@@ -12184,7 +12184,7 @@
"DefaultRunProperties": {
"target": "com.amazonaws.glue#WorkflowRunProperties",
"traits": {
- "smithy.api#documentation": "A collection of properties to be used as part of each execution of the workflow.
"
+ "smithy.api#documentation": "A collection of properties to be used as part of each execution of the workflow.
\n Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to use them within the workflow run.
"
}
},
"Tags": {
@@ -12915,6 +12915,43 @@
}
}
},
+ "com.amazonaws.glue#DataQualityEncryption": {
+ "type": "structure",
+ "members": {
+ "DataQualityEncryptionMode": {
+ "target": "com.amazonaws.glue#DataQualityEncryptionMode",
+ "traits": {
+ "smithy.api#documentation": "The encryption mode to use for encrypting Data Quality assets. These assets include data quality rulesets, results, statistics, anomaly detection models and observations.
\n Valid values are SSEKMS
for encryption using a customer-managed KMS key, or DISABLED
.
"
+ }
+ },
+ "KmsKeyArn": {
+ "target": "com.amazonaws.glue#KmsKeyArn",
+ "traits": {
+ "smithy.api#documentation": "The Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Specifies how Data Quality assets in your account should be encrypted.
"
+ }
+ },
+ "com.amazonaws.glue#DataQualityEncryptionMode": {
+ "type": "enum",
+ "members": {
+ "DISABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISABLED"
+ }
+ },
+ "SSEKMS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "SSE-KMS"
+ }
+ }
+ }
+ },
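The new DataQualityEncryption shape pairs a mode (the wire values are SSE-KMS and DISABLED) with an optional KMS key ARN. A minimal sketch of the client-side invariant this implies: SSE-KMS is meaningless without a customer-managed key. The `dataQualityEncryption` type and `validate` method below are illustrative stand-ins, not the generated SDK types.

```go
package main

import (
	"errors"
	"fmt"
)

// dataQualityEncryption is an illustrative stand-in for the
// DataQualityEncryption structure above, not the generated SDK type.
type dataQualityEncryption struct {
	Mode      string // "SSE-KMS" or "DISABLED", the wire values of DataQualityEncryptionMode
	KmsKeyArn string
}

// validate sketches the sanity check implied by the model: encryption with a
// customer-managed KMS key requires a key ARN to encrypt with.
func (e dataQualityEncryption) validate() error {
	switch e.Mode {
	case "DISABLED":
		return nil
	case "SSE-KMS":
		if e.KmsKeyArn == "" {
			return errors.New("SSE-KMS requires KmsKeyArn")
		}
		return nil
	default:
		return fmt.Errorf("unknown encryption mode %q", e.Mode)
	}
}

func main() {
	cfg := dataQualityEncryption{
		Mode:      "SSE-KMS",
		KmsKeyArn: "arn:aws:kms:us-east-1:123456789012:key/example", // hypothetical ARN
	}
	fmt.Println(cfg.validate() == nil) // prints true
}
```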
"com.amazonaws.glue#DataQualityEvaluationRunAdditionalRunOptions": {
"type": "structure",
"members": {
@@ -17152,6 +17189,12 @@
"traits": {
"smithy.api#documentation": "The encryption configuration for job bookmarks.
"
}
+ },
+ "DataQualityEncryption": {
+ "target": "com.amazonaws.glue#DataQualityEncryption",
+ "traits": {
+ "smithy.api#documentation": "The encryption configuration for Glue Data Quality assets.
"
+ }
}
},
"traits": {
@@ -21378,7 +21421,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Retrieves the metadata for a given job run. Job run history is accessible for 90 days for your workflow and job run.
"
+ "smithy.api#documentation": "Retrieves the metadata for a given job run. Job run history is accessible for 365 days for your workflow and job run.
"
}
},
"com.amazonaws.glue#GetJobRunRequest": {
@@ -21447,7 +21490,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Retrieves metadata for all runs of a given job definition.
",
+ "smithy.api#documentation": "Retrieves metadata for all runs of a given job definition.
\n \n GetJobRuns
returns the job runs in chronological order, with the newest jobs returned first.
",
"smithy.api#paginated": {
"inputToken": "NextToken",
"outputToken": "NextToken",
@@ -27027,7 +27070,7 @@
"WorkerType": {
"target": "com.amazonaws.glue#WorkerType",
"traits": {
- "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
+ "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
}
},
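The per-worker resources described above can be summed to estimate a job's total DPU footprint. The following is a minimal sketch encoding the documented mapping; `workerSpec` and `totalDPU` are illustrative names, not generated SDK types.

```go
package main

import "fmt"

// workerSpec mirrors the per-worker resources stated in the documentation
// above; this is an illustrative type, not part of the generated SDK.
type workerSpec struct {
	DPU    float64
	VCPUs  int
	MemGB  int
	DiskGB int
}

// specs sketches the documented mapping for Spark and Ray worker types.
// Z.2X is measured in M-DPU and applies to Ray jobs.
var specs = map[string]workerSpec{
	"G.1X":   {DPU: 1, VCPUs: 4, MemGB: 16, DiskGB: 94},
	"G.2X":   {DPU: 2, VCPUs: 8, MemGB: 32, DiskGB: 138},
	"G.4X":   {DPU: 4, VCPUs: 16, MemGB: 64, DiskGB: 256},
	"G.8X":   {DPU: 8, VCPUs: 32, MemGB: 128, DiskGB: 512},
	"G.025X": {DPU: 0.25, VCPUs: 2, MemGB: 4, DiskGB: 84},
	"Z.2X":   {DPU: 2, VCPUs: 8, MemGB: 64, DiskGB: 128},
}

// totalDPU returns the DPU consumed by n workers of the given type.
func totalDPU(workerType string, n int) float64 {
	return specs[workerType].DPU * float64(n)
}

func main() {
	fmt.Println(totalDPU("G.4X", 10)) // 10 G.4X workers consume 40 DPU
}
```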
"NumberOfWorkers": {
@@ -27383,7 +27426,7 @@
"WorkerType": {
"target": "com.amazonaws.glue#WorkerType",
"traits": {
- "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
+ "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
}
},
"NumberOfWorkers": {
@@ -27617,7 +27660,7 @@
"WorkerType": {
"target": "com.amazonaws.glue#WorkerType",
"traits": {
- "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
+ "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
}
},
"NumberOfWorkers": {
@@ -33645,7 +33688,7 @@
"RunProperties": {
"target": "com.amazonaws.glue#WorkflowRunProperties",
"traits": {
- "smithy.api#documentation": "The properties to put for the specified run.
",
+ "smithy.api#documentation": "The properties to put for the specified run.
\n Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to use them within the workflow run.
",
"smithy.api#required": {}
}
}
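The warning above suggests passing a secret *reference* in run properties and resolving it inside the run, rather than putting plaintext values in properties that may be logged. A minimal sketch under that assumption; `resolveSecret` is a hypothetical stub standing in for a real lookup (for example, Secrets Manager GetSecretValue), and the property key is invented for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveSecret is a hypothetical stub for a real secret lookup such as
// Amazon Web Services Secrets Manager GetSecretValue.
func resolveSecret(name string) string {
	return "resolved:" + name
}

// buildRunProperties follows the guidance above: store only a secret
// reference in the workflow run properties, never the plaintext value.
func buildRunProperties(secretName string) map[string]string {
	return map[string]string{
		"db_password_secret": secretName, // reference only; safe if logged
	}
}

func main() {
	props := buildRunProperties("prod/db/password")
	// Inside the workflow run, the reference is exchanged for the value.
	value := resolveSecret(props["db_password_secret"])
	fmt.Println(strings.HasPrefix(value, "resolved:")) // prints true
}
```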
@@ -38494,7 +38537,7 @@
"WorkerType": {
"target": "com.amazonaws.glue#WorkerType",
"traits": {
- "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, to offers a scalable and cost effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
+ "smithy.api#documentation": "The type of predefined worker that is allocated when a job runs. Accepts a value of\n G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
\n \n - \n
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 94GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 138GB disk, and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries. It offers a scalable and cost-effective way to run most jobs.
\n \n - \n
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
\n \n - \n
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk, and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X
worker type.
\n \n - \n
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk, and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 or later streaming jobs.
\n \n - \n
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk, and provides up to 8 Ray workers based on the autoscaler.
\n \n
"
}
},
"NumberOfWorkers": {
@@ -38760,7 +38803,7 @@
"RunProperties": {
"target": "com.amazonaws.glue#WorkflowRunProperties",
"traits": {
- "smithy.api#documentation": "The workflow run properties for the new workflow run.
"
+ "smithy.api#documentation": "The workflow run properties for the new workflow run.
\n Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to use them within the workflow run.
"
}
}
},
@@ -44096,7 +44139,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Updates a trigger definition.
"
+ "smithy.api#documentation": "Updates a trigger definition.
\n Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
"
}
},
"com.amazonaws.glue#UpdateTriggerRequest": {
@@ -44328,7 +44371,7 @@
"DefaultRunProperties": {
"target": "com.amazonaws.glue#WorkflowRunProperties",
"traits": {
- "smithy.api#documentation": "A collection of properties to be used as part of each execution of the workflow.
"
+ "smithy.api#documentation": "A collection of properties to be used as part of each execution of the workflow.
\n Run properties may be logged. Do not pass plaintext secrets as properties. Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager or other secret management mechanism if you intend to use them within the workflow run.
"
}
},
"MaxConcurrentRuns": {
diff --git a/codegen/sdk-codegen/aws-models/greengrassv2.json b/codegen/sdk-codegen/aws-models/greengrassv2.json
index 3d4035e32f6..6b03e5a0772 100644
--- a/codegen/sdk-codegen/aws-models/greengrassv2.json
+++ b/codegen/sdk-codegen/aws-models/greengrassv2.json
@@ -958,6 +958,24 @@
"traits": {
"smithy.api#documentation": "The time at which the core device's status last updated, expressed in ISO 8601\n format.
"
}
+ },
+ "platform": {
+ "target": "com.amazonaws.greengrassv2#CoreDevicePlatformString",
+ "traits": {
+ "smithy.api#documentation": "The operating system platform that the core device runs.
"
+ }
+ },
+ "architecture": {
+ "target": "com.amazonaws.greengrassv2#CoreDeviceArchitectureString",
+ "traits": {
+ "smithy.api#documentation": "The computer architecture of the core device.
"
+ }
+ },
+ "runtime": {
+ "target": "com.amazonaws.greengrassv2#CoreDeviceRuntimeString",
+ "traits": {
+ "smithy.api#documentation": "The runtime for the core device. The runtime can be:
\n \n - \n
\n aws_nucleus_classic
\n
\n \n - \n
\n aws_nucleus_lite
\n
\n \n
"
+ }
}
},
"traits": {
@@ -982,6 +1000,15 @@
}
}
},
+ "com.amazonaws.greengrassv2#CoreDeviceRuntimeString": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 255
+ }
+ }
+ },
"com.amazonaws.greengrassv2#CoreDeviceStatus": {
"type": "enum",
"members": {
@@ -2425,6 +2452,12 @@
"smithy.api#documentation": "The computer architecture of the core device.
"
}
},
+ "runtime": {
+ "target": "com.amazonaws.greengrassv2#CoreDeviceRuntimeString",
+ "traits": {
+ "smithy.api#documentation": "The runtime for the core device. The runtime can be:
\n \n - \n
\n aws_nucleus_classic
\n
\n \n - \n
\n aws_nucleus_lite
\n
\n \n
"
+ }
+ },
"status": {
"target": "com.amazonaws.greengrassv2#CoreDeviceStatus",
"traits": {
@@ -4828,7 +4861,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Retrieves a paginated list of Greengrass core devices.
\n \n IoT Greengrass relies on individual devices to send status updates to the Amazon Web Services Cloud. If the\n IoT Greengrass Core software isn't running on the device, or if device isn't connected to the Amazon Web Services Cloud,\n then the reported status of that device might not reflect its current status. The status\n timestamp indicates when the device status was last updated.
\n Core devices send status updates at the following times:
\n \n - \n
When the IoT Greengrass Core software starts
\n \n - \n
When the core device receives a deployment from the Amazon Web Services Cloud
\n \n - \n
When the status of any component on the core device becomes\n BROKEN
\n
\n \n - \n
At a regular interval that you can configure, which defaults to 24 hours
\n \n - \n
For IoT Greengrass Core v2.7.0, the core device sends status updates upon local deployment and\n cloud deployment
\n \n
\n ",
+ "smithy.api#documentation": "Retrieves a paginated list of Greengrass core devices.
\n \n IoT Greengrass relies on individual devices to send status updates to the Amazon Web Services Cloud. If the\n IoT Greengrass Core software isn't running on the device, or if the device isn't connected to the Amazon Web Services Cloud,\n then the reported status of that device might not reflect its current status. The status\n timestamp indicates when the device status was last updated.
\n Core devices send status updates at the following times:
\n \n - \n
When the IoT Greengrass Core software starts
\n \n - \n
When the core device receives a deployment from the Amazon Web Services Cloud
\n \n - \n
For Greengrass nucleus 2.12.2 and earlier, the core device sends status updates when the\n status of any component on the core device becomes ERRORED
or\n BROKEN
.
\n \n - \n
For Greengrass nucleus 2.12.3 and later, the core device sends status updates when the\n status of any component on the core device becomes ERRORED
,\n BROKEN
, RUNNING
, or FINISHED
.
\n \n - \n
At a regular interval that you can configure, which defaults to 24 hours
\n \n - \n
For IoT Greengrass Core v2.7.0, the core device sends status updates upon local deployment and\n cloud deployment
\n \n
\n ",
"smithy.api#http": {
"method": "GET",
"uri": "/greengrass/v2/coreDevices",
@@ -4872,6 +4905,13 @@
"smithy.api#documentation": "The token to be used for the next set of paginated results.
",
"smithy.api#httpQuery": "nextToken"
}
+ },
+ "runtime": {
+ "target": "com.amazonaws.greengrassv2#CoreDeviceRuntimeString",
+ "traits": {
+ "smithy.api#documentation": "The runtime to be used by the core device. The runtime can be:
\n \n - \n
\n aws_nucleus_classic
\n
\n \n - \n
\n aws_nucleus_lite
\n
\n \n
",
+ "smithy.api#httpQuery": "runtime"
+ }
}
},
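The new runtime query parameter on ListCoreDevices restricts results to one nucleus runtime (aws_nucleus_classic or aws_nucleus_lite). A minimal sketch of that filtering semantics; `coreDevice` is an illustrative subset of the CoreDevice summary, not the generated SDK type, and the real request is the SDK's ListCoreDevices with its runtime parameter.

```go
package main

import "fmt"

// coreDevice is an illustrative subset of the CoreDevice summary above,
// not the generated SDK type.
type coreDevice struct {
	Name    string
	Runtime string // "aws_nucleus_classic" or "aws_nucleus_lite"
}

// filterByRuntime sketches what the runtime query parameter does: keep only
// core devices running the requested nucleus runtime.
func filterByRuntime(devices []coreDevice, runtime string) []string {
	var names []string
	for _, d := range devices {
		if d.Runtime == runtime {
			names = append(names, d.Name)
		}
	}
	return names
}

func main() {
	devices := []coreDevice{
		{Name: "core-a", Runtime: "aws_nucleus_classic"},
		{Name: "core-b", Runtime: "aws_nucleus_lite"},
	}
	fmt.Println(filterByRuntime(devices, "aws_nucleus_lite")) // prints [core-b]
}
```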
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/guardduty.json b/codegen/sdk-codegen/aws-models/guardduty.json
index 1292bbc67dc..615a4135bf3 100644
--- a/codegen/sdk-codegen/aws-models/guardduty.json
+++ b/codegen/sdk-codegen/aws-models/guardduty.json
@@ -2090,7 +2090,7 @@
"target": "com.amazonaws.guardduty#FindingCriteria",
"traits": {
"smithy.api#clientOptional": {},
- "smithy.api#documentation": "Represents the criteria to be used in the filter for querying findings.
\n You can only use the following attributes to query findings:
\n \n - \n
accountId
\n \n - \n
id
\n \n - \n
region
\n \n - \n
severity
\n To filter on the basis of severity, the API and CLI use the following input list for\n the FindingCriteria\n condition:
\n \n - \n
\n Low: [\"1\", \"2\", \"3\"]
\n
\n \n - \n
\n Medium: [\"4\", \"5\", \"6\"]
\n
\n \n - \n
\n High: [\"7\", \"8\", \"9\"]
\n
\n \n
\n For more information, see Severity\n levels for GuardDuty findings.
\n \n - \n
type
\n \n - \n
updatedAt
\n Type: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ\n depending on whether the value contains milliseconds.
\n \n - \n
resource.accessKeyDetails.accessKeyId
\n \n - \n
resource.accessKeyDetails.principalId
\n \n - \n
resource.accessKeyDetails.userName
\n \n - \n
resource.accessKeyDetails.userType
\n \n - \n
resource.instanceDetails.iamInstanceProfile.id
\n \n - \n
resource.instanceDetails.imageId
\n \n - \n
resource.instanceDetails.instanceId
\n \n - \n
resource.instanceDetails.tags.key
\n \n - \n
resource.instanceDetails.tags.value
\n \n - \n
resource.instanceDetails.networkInterfaces.ipv6Addresses
\n \n - \n
resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress
\n \n - \n
resource.instanceDetails.networkInterfaces.publicDnsName
\n \n - \n
resource.instanceDetails.networkInterfaces.publicIp
\n \n - \n
resource.instanceDetails.networkInterfaces.securityGroups.groupId
\n \n - \n
resource.instanceDetails.networkInterfaces.securityGroups.groupName
\n \n - \n
resource.instanceDetails.networkInterfaces.subnetId
\n \n - \n
resource.instanceDetails.networkInterfaces.vpcId
\n \n - \n
resource.instanceDetails.outpostArn
\n \n - \n
resource.resourceType
\n \n - \n
resource.s3BucketDetails.publicAccess.effectivePermissions
\n \n - \n
resource.s3BucketDetails.name
\n \n - \n
resource.s3BucketDetails.tags.key
\n \n - \n
resource.s3BucketDetails.tags.value
\n \n - \n
resource.s3BucketDetails.type
\n \n - \n
service.action.actionType
\n \n - \n
service.action.awsApiCallAction.api
\n \n - \n
service.action.awsApiCallAction.callerType
\n \n - \n
service.action.awsApiCallAction.errorCode
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.city.cityName
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.country.countryName
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.ipAddressV4
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.ipAddressV6
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.organization.asn
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg
\n \n - \n
service.action.awsApiCallAction.serviceName
\n \n - \n
service.action.dnsRequestAction.domain
\n \n - \n
service.action.dnsRequestAction.domainWithSuffix
\n \n - \n
service.action.networkConnectionAction.blocked
\n \n - \n
service.action.networkConnectionAction.connectionDirection
\n \n - \n
service.action.networkConnectionAction.localPortDetails.port
\n \n - \n
service.action.networkConnectionAction.protocol
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.city.cityName
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.country.countryName
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.ipAddressV4
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.ipAddressV6
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.organization.asn
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg
\n \n - \n
service.action.networkConnectionAction.remotePortDetails.port
\n \n - \n
service.action.awsApiCallAction.remoteAccountDetails.affiliated
\n \n - \n
service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4
\n \n - \n
service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV6
\n \n - \n
service.action.kubernetesApiCallAction.namespace
\n \n - \n
service.action.kubernetesApiCallAction.remoteIpDetails.organization.asn
\n \n - \n
service.action.kubernetesApiCallAction.requestUri
\n \n - \n
service.action.kubernetesApiCallAction.statusCode
\n \n - \n
service.action.networkConnectionAction.localIpDetails.ipAddressV4
\n \n - \n
service.action.networkConnectionAction.localIpDetails.ipAddressV6
\n \n - \n
service.action.networkConnectionAction.protocol
\n \n - \n
service.action.awsApiCallAction.serviceName
\n \n - \n
service.action.awsApiCallAction.remoteAccountDetails.accountId
\n \n - \n
service.additionalInfo.threatListName
\n \n - \n
service.resourceRole
\n \n - \n
resource.eksClusterDetails.name
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.name
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.namespace
\n \n - \n
resource.kubernetesDetails.kubernetesUserDetails.username
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix
\n \n - \n
service.ebsVolumeScanDetails.scanId
\n \n - \n
service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name
\n \n - \n
service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity
\n \n - \n
service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash
\n \n - \n
resource.ecsClusterDetails.name
\n \n - \n
resource.ecsClusterDetails.taskDetails.containers.image
\n \n - \n
resource.ecsClusterDetails.taskDetails.definitionArn
\n \n - \n
resource.containerDetails.image
\n \n - \n
resource.rdsDbInstanceDetails.dbInstanceIdentifier
\n \n - \n
resource.rdsDbInstanceDetails.dbClusterIdentifier
\n \n - \n
resource.rdsDbInstanceDetails.engine
\n \n - \n
resource.rdsDbUserDetails.user
\n \n - \n
resource.rdsDbInstanceDetails.tags.key
\n \n - \n
resource.rdsDbInstanceDetails.tags.value
\n \n - \n
service.runtimeDetails.process.executableSha256
\n \n - \n
service.runtimeDetails.process.name
\n \n - \n
service.runtimeDetails.process.name
\n \n - \n
resource.lambdaDetails.functionName
\n \n - \n
resource.lambdaDetails.functionArn
\n \n - \n
resource.lambdaDetails.tags.key
\n \n - \n
resource.lambdaDetails.tags.value
\n \n
",
+ "smithy.api#documentation": "Represents the criteria to be used in the filter for querying findings.
\n You can only use the following attributes to query findings:
\n \n - \n
accountId
\n \n - \n
id
\n \n - \n
region
\n \n - \n
severity
\n To filter on the basis of severity, the API and CLI use the following input list for\n the FindingCriteria\n condition:
\n \n - \n
\n Low: [\"1\", \"2\", \"3\"]
\n
\n \n - \n
\n Medium: [\"4\", \"5\", \"6\"]
\n
\n \n - \n
\n High: [\"7\", \"8\"]
\n
\n \n - \n
\n Critical: [\"9\", \"10\"]
\n
\n \n
\n For more information, see Findings severity levels\n in the Amazon GuardDuty User Guide.
\n \n - \n
type
\n \n - \n
updatedAt
\n Type: ISO 8601 string format: YYYY-MM-DDTHH:MM:SS.SSSZ or YYYY-MM-DDTHH:MM:SSZ\n depending on whether the value contains milliseconds.
\n \n - \n
resource.accessKeyDetails.accessKeyId
\n \n - \n
resource.accessKeyDetails.principalId
\n \n - \n
resource.accessKeyDetails.userName
\n \n - \n
resource.accessKeyDetails.userType
\n \n - \n
resource.instanceDetails.iamInstanceProfile.id
\n \n - \n
resource.instanceDetails.imageId
\n \n - \n
resource.instanceDetails.instanceId
\n \n - \n
resource.instanceDetails.tags.key
\n \n - \n
resource.instanceDetails.tags.value
\n \n - \n
resource.instanceDetails.networkInterfaces.ipv6Addresses
\n \n - \n
resource.instanceDetails.networkInterfaces.privateIpAddresses.privateIpAddress
\n \n - \n
resource.instanceDetails.networkInterfaces.publicDnsName
\n \n - \n
resource.instanceDetails.networkInterfaces.publicIp
\n \n - \n
resource.instanceDetails.networkInterfaces.securityGroups.groupId
\n \n - \n
resource.instanceDetails.networkInterfaces.securityGroups.groupName
\n \n - \n
resource.instanceDetails.networkInterfaces.subnetId
\n \n - \n
resource.instanceDetails.networkInterfaces.vpcId
\n \n - \n
resource.instanceDetails.outpostArn
\n \n - \n
resource.resourceType
\n \n - \n
resource.s3BucketDetails.publicAccess.effectivePermissions
\n \n - \n
resource.s3BucketDetails.name
\n \n - \n
resource.s3BucketDetails.tags.key
\n \n - \n
resource.s3BucketDetails.tags.value
\n \n - \n
resource.s3BucketDetails.type
\n \n - \n
service.action.actionType
\n \n - \n
service.action.awsApiCallAction.api
\n \n - \n
service.action.awsApiCallAction.callerType
\n \n - \n
service.action.awsApiCallAction.errorCode
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.city.cityName
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.country.countryName
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.ipAddressV4
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.ipAddressV6
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.organization.asn
\n \n - \n
service.action.awsApiCallAction.remoteIpDetails.organization.asnOrg
\n \n - \n
service.action.awsApiCallAction.serviceName
\n \n - \n
service.action.dnsRequestAction.domain
\n \n - \n
service.action.dnsRequestAction.domainWithSuffix
\n \n - \n
service.action.networkConnectionAction.blocked
\n \n - \n
service.action.networkConnectionAction.connectionDirection
\n \n - \n
service.action.networkConnectionAction.localPortDetails.port
\n \n - \n
service.action.networkConnectionAction.protocol
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.city.cityName
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.country.countryName
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.ipAddressV4
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.ipAddressV6
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.organization.asn
\n \n - \n
service.action.networkConnectionAction.remoteIpDetails.organization.asnOrg
\n \n - \n
service.action.networkConnectionAction.remotePortDetails.port
\n \n - \n
service.action.awsApiCallAction.remoteAccountDetails.affiliated
\n \n - \n
service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV4
\n \n - \n
service.action.kubernetesApiCallAction.remoteIpDetails.ipAddressV6
\n \n - \n
service.action.kubernetesApiCallAction.namespace
\n \n - \n
service.action.kubernetesApiCallAction.remoteIpDetails.organization.asn
\n \n - \n
service.action.kubernetesApiCallAction.requestUri
\n \n - \n
service.action.kubernetesApiCallAction.statusCode
\n \n - \n
service.action.networkConnectionAction.localIpDetails.ipAddressV4
\n \n - \n
service.action.networkConnectionAction.localIpDetails.ipAddressV6
\n \n - \n
service.action.networkConnectionAction.protocol
\n \n - \n
service.action.awsApiCallAction.serviceName
\n \n - \n
service.action.awsApiCallAction.remoteAccountDetails.accountId
\n \n - \n
service.additionalInfo.threatListName
\n \n - \n
service.resourceRole
\n \n - \n
resource.eksClusterDetails.name
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.name
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.namespace
\n \n - \n
resource.kubernetesDetails.kubernetesUserDetails.username
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.containers.image
\n \n - \n
resource.kubernetesDetails.kubernetesWorkloadDetails.containers.imagePrefix
\n \n - \n
service.ebsVolumeScanDetails.scanId
\n \n - \n
service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.name
\n \n - \n
service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.severity
\n \n - \n
service.ebsVolumeScanDetails.scanDetections.threatDetectedByName.threatNames.filePaths.hash
\n \n - \n
resource.ecsClusterDetails.name
\n \n - \n
resource.ecsClusterDetails.taskDetails.containers.image
\n \n - \n
resource.ecsClusterDetails.taskDetails.definitionArn
\n \n - \n
resource.containerDetails.image
\n \n - \n
resource.rdsDbInstanceDetails.dbInstanceIdentifier
\n \n - \n
resource.rdsDbInstanceDetails.dbClusterIdentifier
\n \n - \n
resource.rdsDbInstanceDetails.engine
\n \n - \n
resource.rdsDbUserDetails.user
\n \n - \n
resource.rdsDbInstanceDetails.tags.key
\n \n - \n
resource.rdsDbInstanceDetails.tags.value
\n \n - \n
service.runtimeDetails.process.executableSha256
\n \n - \n
service.runtimeDetails.process.name
\n \n - \n
resource.lambdaDetails.functionName
\n \n - \n
resource.lambdaDetails.functionArn
\n \n - \n
resource.lambdaDetails.tags.key
\n \n - \n
resource.lambdaDetails.tags.value
\n \n
",
"smithy.api#jsonName": "findingCriteria",
"smithy.api#required": {}
}
@@ -3643,7 +3643,7 @@
"target": "com.amazonaws.guardduty#Scans",
"traits": {
"smithy.api#clientOptional": {},
- "smithy.api#documentation": "Contains information about malware scans.
",
+ "smithy.api#documentation": "Contains information about malware scans associated with GuardDuty Malware Protection for EC2.
",
"smithy.api#jsonName": "scans",
"smithy.api#required": {}
}
@@ -11969,7 +11969,7 @@
"Name": {
"target": "com.amazonaws.guardduty#OrgFeatureAdditionalConfiguration",
"traits": {
- "smithy.api#documentation": "The name of the additional configuration that will be configured for the\n organization.
",
+ "smithy.api#documentation": "The name of the additional configuration that will be configured for the\n organization. These values are applicable only to the Runtime Monitoring protection plan.
",
"smithy.api#jsonName": "name"
}
},
@@ -11982,7 +11982,7 @@
}
},
"traits": {
- "smithy.api#documentation": "A list of additional configurations which will be configured for the organization.
"
+ "smithy.api#documentation": "A list of additional configurations which will be configured for the organization.
\n Additional configuration applies only to the GuardDuty Runtime Monitoring protection plan.
"
}
},
"com.amazonaws.guardduty#OrganizationAdditionalConfigurationResult": {
@@ -11991,7 +11991,7 @@
"Name": {
"target": "com.amazonaws.guardduty#OrgFeatureAdditionalConfiguration",
"traits": {
- "smithy.api#documentation": "The name of the additional configuration that is configured for the member accounts within\n the organization.
",
+ "smithy.api#documentation": "The name of the additional configuration that is configured for the member accounts within\n the organization. These values are applicable only to the Runtime Monitoring protection plan.
",
"smithy.api#jsonName": "name"
}
},
@@ -14015,7 +14015,7 @@
"DetectorId": {
"target": "com.amazonaws.guardduty#DetectorId",
"traits": {
- "smithy.api#documentation": "The unique ID of the detector that the request is associated with.
\n To find the detectorId
in the current Region, see the\nSettings page in the GuardDuty console, or run the ListDetectors API.
",
+ "smithy.api#documentation": "The unique ID of the detector that is associated with the request.
\n To find the detectorId
in the current Region, see the\nSettings page in the GuardDuty console, or run the ListDetectors API.
",
"smithy.api#jsonName": "detectorId"
}
},
@@ -14043,7 +14043,7 @@
"FailureReason": {
"target": "com.amazonaws.guardduty#NonEmptyString",
"traits": {
- "smithy.api#documentation": "Represents the reason for FAILED scan status.
",
+ "smithy.api#documentation": "Represents the reason for the FAILED
scan status.
",
"smithy.api#jsonName": "failureReason"
}
},
@@ -14119,7 +14119,7 @@
}
},
"traits": {
- "smithy.api#documentation": "Contains information about a malware scan.
"
+ "smithy.api#documentation": "Contains information about malware scans associated with GuardDuty Malware Protection for EC2.
"
}
},
"com.amazonaws.guardduty#ScanCondition": {
@@ -16396,7 +16396,7 @@
"smithy.api#deprecated": {
"message": "This field is deprecated, use AutoEnableOrganizationMembers instead"
},
- "smithy.api#documentation": "Represents whether or not to automatically enable member accounts in the organization.
\n Even though this is still supported, we recommend using\n AutoEnableOrganizationMembers
to achieve the similar results. You must provide a \n value for either autoEnableOrganizationMembers
or autoEnable
.
",
+ "smithy.api#documentation": "Represents whether to automatically enable member accounts in the organization. This\n applies to only new member accounts, not the existing member accounts. When a new account joins the organization,\n the chosen features will be enabled for them by default.
\n Even though this is still supported, we recommend using\n AutoEnableOrganizationMembers
to achieve similar results. You must provide a \n value for either autoEnableOrganizationMembers
or autoEnable
.
",
"smithy.api#jsonName": "autoEnable"
}
},
diff --git a/codegen/sdk-codegen/aws-models/iot.json b/codegen/sdk-codegen/aws-models/iot.json
index 5c674934904..a9c2ad09c8b 100644
--- a/codegen/sdk-codegen/aws-models/iot.json
+++ b/codegen/sdk-codegen/aws-models/iot.json
@@ -468,6 +468,9 @@
{
"target": "com.amazonaws.iot#GetStatistics"
},
+ {
+ "target": "com.amazonaws.iot#GetThingConnectivityData"
+ },
{
"target": "com.amazonaws.iot#GetTopicRule"
},
@@ -6890,6 +6893,17 @@
"smithy.api#pattern": "^[a-zA-Z0-9:.]+$"
}
},
+ "com.amazonaws.iot#ConnectivityApiThingName": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 128
+ },
+ "smithy.api#pattern": "^[a-zA-Z0-9:_-]+$",
+ "smithy.api#sensitive": {}
+ }
+ },
"com.amazonaws.iot#ConnectivityTimestamp": {
"type": "long"
},
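The new `ConnectivityApiThingName` shape above constrains thing names to the pattern `^[a-zA-Z0-9:_-]+$` with a length of 1 to 128. A client-side pre-check of those constraints might look like this sketch (the helper name is an assumption, not part of the SDK):

```go
package main

import (
	"fmt"
	"regexp"
)

// thingNameRe mirrors the ConnectivityApiThingName smithy.api#pattern trait.
var thingNameRe = regexp.MustCompile(`^[a-zA-Z0-9:_-]+$`)

// validThingName applies the documented pattern and the 1..128 length trait.
func validThingName(name string) bool {
	return len(name) >= 1 && len(name) <= 128 && thingNameRe.MatchString(name)
}

func main() {
	fmt.Println(validThingName("sensor-01")) // prints true
	fmt.Println(validThingName("bad name!")) // prints false
}
```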
@@ -7492,7 +7506,7 @@
"roleArn": {
"target": "com.amazonaws.iot#RoleArn",
"traits": {
- "smithy.api#documentation": "The IAM role that allows access to create the command.
"
+ "smithy.api#documentation": "The IAM role that you must provide when using the AWS-IoT-FleetWise
namespace.\n The role grants IoT Device Management permission to access IoT FleetWise resources \n for generating the payload for the command. This field is not required when you use the\n AWS-IoT
namespace.
"
}
},
"tags": {
@@ -16802,6 +16816,95 @@
"com.amazonaws.iot#DisconnectReason": {
"type": "string"
},
+ "com.amazonaws.iot#DisconnectReasonValue": {
+ "type": "enum",
+ "members": {
+ "AUTH_ERROR": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "AUTH_ERROR"
+ }
+ },
+ "CLIENT_INITIATED_DISCONNECT": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "CLIENT_INITIATED_DISCONNECT"
+ }
+ },
+ "CLIENT_ERROR": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "CLIENT_ERROR"
+ }
+ },
+ "CONNECTION_LOST": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "CONNECTION_LOST"
+ }
+ },
+ "DUPLICATE_CLIENTID": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DUPLICATE_CLIENTID"
+ }
+ },
+ "FORBIDDEN_ACCESS": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "FORBIDDEN_ACCESS"
+ }
+ },
+ "MQTT_KEEP_ALIVE_TIMEOUT": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "MQTT_KEEP_ALIVE_TIMEOUT"
+ }
+ },
+ "SERVER_ERROR": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "SERVER_ERROR"
+ }
+ },
+ "SERVER_INITIATED_DISCONNECT": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "SERVER_INITIATED_DISCONNECT"
+ }
+ },
+ "THROTTLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "THROTTLED"
+ }
+ },
+ "WEBSOCKET_TTL_EXPIRATION": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "WEBSOCKET_TTL_EXPIRATION"
+ }
+ },
+ "CUSTOMAUTH_TTL_EXPIRATION": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "CUSTOMAUTH_TTL_EXPIRATION"
+ }
+ },
+ "UNKNOWN": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "UNKNOWN"
+ }
+ },
+ "NONE": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "NONE"
+ }
+ }
+ }
+ },
"com.amazonaws.iot#DisplayName": {
"type": "string",
"traits": {
@@ -18414,7 +18517,7 @@
"timeToLive": {
"target": "com.amazonaws.iot#DateType",
"traits": {
- "smithy.api#documentation": "The time to live (TTL) parameter for the GetCommandExecution
API.
"
+ "smithy.api#documentation": "The time to live (TTL) parameter that indicates the duration for which executions will\n be retained in your account. The default value is six months.
"
}
}
},
@@ -18486,7 +18589,7 @@
"roleArn": {
"target": "com.amazonaws.iot#RoleArn",
"traits": {
- "smithy.api#documentation": "The IAM role that allows access to retrieve information about the command.
"
+ "smithy.api#documentation": "The IAM role that you provided when creating the command with AWS-IoT-FleetWise
\n as the namespace.
"
}
},
"createdAt": {
@@ -19605,6 +19708,94 @@
"smithy.api#output": {}
}
},
+ "com.amazonaws.iot#GetThingConnectivityData": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.iot#GetThingConnectivityDataRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.iot#GetThingConnectivityDataResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.iot#IndexNotReadyException"
+ },
+ {
+ "target": "com.amazonaws.iot#InternalFailureException"
+ },
+ {
+ "target": "com.amazonaws.iot#InvalidRequestException"
+ },
+ {
+ "target": "com.amazonaws.iot#ResourceNotFoundException"
+ },
+ {
+ "target": "com.amazonaws.iot#ServiceUnavailableException"
+ },
+ {
+ "target": "com.amazonaws.iot#ThrottlingException"
+ },
+ {
+ "target": "com.amazonaws.iot#UnauthorizedException"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Retrieves the live connectivity status per device.
",
+ "smithy.api#http": {
+ "method": "POST",
+ "uri": "/things/{thingName}/connectivity-data",
+ "code": 200
+ }
+ }
+ },
+ "com.amazonaws.iot#GetThingConnectivityDataRequest": {
+ "type": "structure",
+ "members": {
+ "thingName": {
+ "target": "com.amazonaws.iot#ConnectivityApiThingName",
+ "traits": {
+ "smithy.api#documentation": "The name of your IoT thing.
",
+ "smithy.api#httpLabel": {},
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.iot#GetThingConnectivityDataResponse": {
+ "type": "structure",
+ "members": {
+ "thingName": {
+ "target": "com.amazonaws.iot#ConnectivityApiThingName",
+ "traits": {
+ "smithy.api#documentation": "The name of your IoT thing.
"
+ }
+ },
+ "connected": {
+ "target": "com.amazonaws.iot#Boolean",
+ "traits": {
+ "smithy.api#documentation": "A Boolean that indicates the connectivity status.
"
+ }
+ },
+ "timestamp": {
+ "target": "com.amazonaws.iot#Timestamp",
+ "traits": {
+ "smithy.api#documentation": "The timestamp of when the event occurred.
"
+ }
+ },
+ "disconnectReason": {
+ "target": "com.amazonaws.iot#DisconnectReasonValue",
+ "traits": {
+ "smithy.api#documentation": "The reason why the client is disconnecting.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.iot#GetTopicRule": {
"type": "operation",
"input": {
@@ -22712,7 +22903,7 @@
}
],
"traits": {
- "smithy.api#documentation": "List all command executions.
\n \n You must provide only the\n startedTimeFilter
or the completedTimeFilter
information. If you \n provide both time filters, the API will generate an error.\n You can use this information to find command executions that started within\n a specific timeframe.
\n ",
+ "smithy.api#documentation": "List all command executions.
\n \n \n - \n
You must provide only the startedTimeFilter
or \n the completedTimeFilter
information. If you provide \n both time filters, the API will generate an error. You can use \n this information to retrieve a list of command executions \n within a specific timeframe.
\n \n - \n
You must provide only the commandArn
or \n the thingArn
information depending on whether you want\n to list executions for a specific command or an IoT thing. If you provide \n both fields, the API will generate an error.
\n \n
\n For more information about considerations for using this API, see\n List\n command executions in your account (CLI).
\n ",
"smithy.api#http": {
"method": "POST",
"uri": "/command-executions",
diff --git a/codegen/sdk-codegen/aws-models/m2.json b/codegen/sdk-codegen/aws-models/m2.json
index 3a9fc2f4fc4..f3088202959 100644
--- a/codegen/sdk-codegen/aws-models/m2.json
+++ b/codegen/sdk-codegen/aws-models/m2.json
@@ -1974,6 +1974,12 @@
"smithy.api#documentation": "Configures the maintenance window that you want for the runtime environment. The maintenance window must have the format ddd:hh24:mi-ddd:hh24:mi
and must be less than 24 hours. The following two examples are valid maintenance windows: sun:23:45-mon:00:15
or sat:01:00-sat:03:00
.
\n If you do not provide a value, a random system-generated value will be assigned.
"
}
},
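The maintenance-window documentation above requires the `ddd:hh24:mi-ddd:hh24:mi` format. A minimal, illustrative Go check of that syntax (it validates the format only and does not enforce the under-24-hours rule; the variable name is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
)

// maintenanceWindowRe matches ddd:hh24:mi-ddd:hh24:mi, e.g.
// "sun:23:45-mon:00:15" or "sat:01:00-sat:03:00".
var maintenanceWindowRe = regexp.MustCompile(
	`^(mon|tue|wed|thu|fri|sat|sun):([01][0-9]|2[0-3]):[0-5][0-9]-(mon|tue|wed|thu|fri|sat|sun):([01][0-9]|2[0-3]):[0-5][0-9]$`)

func main() {
	for _, w := range []string{"sun:23:45-mon:00:15", "sat:01:00-sat:03:00", "sun:24:00-mon:00:15"} {
		// The third example uses hour 24, which the 24-hour clock disallows.
		fmt.Println(w, maintenanceWindowRe.MatchString(w))
	}
}
```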
+ "networkType": {
+ "target": "com.amazonaws.m2#NetworkType",
+ "traits": {
+ "smithy.api#documentation": "The network type required for the runtime environment.
"
+ }
+ },
"clientToken": {
"target": "com.amazonaws.m2#ClientToken",
"traits": {
@@ -2853,6 +2859,12 @@
"smithy.api#documentation": "The timestamp when the runtime environment was created.
",
"smithy.api#required": {}
}
+ },
+ "networkType": {
+ "target": "com.amazonaws.m2#NetworkType",
+ "traits": {
+ "smithy.api#documentation": "The network type supported by the runtime environment.
"
+ }
}
},
"traits": {
@@ -3960,6 +3972,12 @@
"traits": {
"smithy.api#documentation": "The identifier of a customer managed key.
"
}
+ },
+ "networkType": {
+ "target": "com.amazonaws.m2#NetworkType",
+ "traits": {
+ "smithy.api#documentation": "The network type supported by the runtime environment.
"
+ }
}
}
},
@@ -5289,6 +5307,21 @@
}
}
},
+ "com.amazonaws.m2#NetworkType": {
+ "type": "string",
+ "traits": {
+ "smithy.api#enum": [
+ {
+ "value": "ipv4",
+ "name": "IPV4"
+ },
+ {
+ "value": "dual",
+ "name": "DUAL"
+ }
+ ]
+ }
+ },
"com.amazonaws.m2#NextToken": {
"type": "string",
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/mediaconnect.json b/codegen/sdk-codegen/aws-models/mediaconnect.json
index 69a8937f51a..191555a3fba 100644
--- a/codegen/sdk-codegen/aws-models/mediaconnect.json
+++ b/codegen/sdk-codegen/aws-models/mediaconnect.json
@@ -137,6 +137,12 @@
"smithy.api#required": {}
}
},
+ "MulticastSourceSettings": {
+ "target": "com.amazonaws.mediaconnect#MulticastSourceSettings",
+ "traits": {
+ "smithy.api#jsonName": "multicastSourceSettings"
+ }
+ },
"Name": {
"target": "com.amazonaws.mediaconnect#__string",
"traits": {
@@ -1276,6 +1282,12 @@
"smithy.api#required": {}
}
},
+ "MulticastSourceSettings": {
+ "target": "com.amazonaws.mediaconnect#MulticastSourceSettings",
+ "traits": {
+ "smithy.api#jsonName": "multicastSourceSettings"
+ }
+ },
"Name": {
"target": "com.amazonaws.mediaconnect#__string",
"traits": {
@@ -6594,6 +6606,21 @@
"smithy.api#documentation": "The settings for source monitoring."
}
},
+ "com.amazonaws.mediaconnect#MulticastSourceSettings": {
+ "type": "structure",
+ "members": {
+ "MulticastSourceIp": {
+ "target": "com.amazonaws.mediaconnect#__string",
+ "traits": {
+ "smithy.api#documentation": "The IP address of the source for source-specific multicast (SSM).",
+ "smithy.api#jsonName": "multicastSourceIp"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The settings related to the multicast source."
+ }
+ },
"com.amazonaws.mediaconnect#NetworkInterfaceType": {
"type": "enum",
"members": {
@@ -9006,6 +9033,12 @@
"smithy.api#jsonName": "multicastIp"
}
},
+ "MulticastSourceSettings": {
+ "target": "com.amazonaws.mediaconnect#MulticastSourceSettings",
+ "traits": {
+ "smithy.api#jsonName": "multicastSourceSettings"
+ }
+ },
"NetworkName": {
"target": "com.amazonaws.mediaconnect#__string",
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/medialive.json b/codegen/sdk-codegen/aws-models/medialive.json
index acee1eed363..4796ffa3650 100644
--- a/codegen/sdk-codegen/aws-models/medialive.json
+++ b/codegen/sdk-codegen/aws-models/medialive.json
@@ -4478,6 +4478,34 @@
"smithy.api#documentation": "Number of milliseconds to delay the output from the second pipeline.",
"smithy.api#jsonName": "sendDelayMs"
}
+ },
+ "KlvBehavior": {
+ "target": "com.amazonaws.medialive#CmafKLVBehavior",
+ "traits": {
+ "smithy.api#documentation": "If set to passthrough, passes any KLV data from the input source to this output.",
+ "smithy.api#jsonName": "klvBehavior"
+ }
+ },
+ "KlvNameModifier": {
+ "target": "com.amazonaws.medialive#__stringMax100",
+ "traits": {
+ "smithy.api#documentation": "Change the modifier that MediaLive automatically adds to the Streams() name that identifies a KLV track. The default is \"klv\", which means the default name will be Streams(klv.cmfm). Any string you enter here will replace the \"klv\" string.\\nThe modifier can only contain: numbers, letters, plus (+), minus (-), underscore (_) and period (.) and has a maximum length of 100 characters.",
+ "smithy.api#jsonName": "klvNameModifier"
+ }
+ },
+ "NielsenId3NameModifier": {
+ "target": "com.amazonaws.medialive#__stringMax100",
+ "traits": {
+ "smithy.api#documentation": "Change the modifier that MediaLive automatically adds to the Streams() name that identifies a Nielsen ID3 track. The default is \"nid3\", which means the default name will be Streams(nid3.cmfm). Any string you enter here will replace the \"nid3\" string.\\nThe modifier can only contain: numbers, letters, plus (+), minus (-), underscore (_) and period (.) and has a maximum length of 100 characters.",
+ "smithy.api#jsonName": "nielsenId3NameModifier"
+ }
+ },
+ "Scte35NameModifier": {
+ "target": "com.amazonaws.medialive#__stringMax100",
+ "traits": {
+ "smithy.api#documentation": "Change the modifier that MediaLive automatically adds to the Streams() name for a SCTE 35 track. The default is \"scte\", which means the default name will be Streams(scte.cmfm). Any string you enter here will replace the \"scte\" string.\\nThe modifier can only contain: numbers, letters, plus (+), minus (-), underscore (_) and period (.) and has a maximum length of 100 characters.",
+ "smithy.api#jsonName": "scte35NameModifier"
+ }
}
},
"traits": {
@@ -4519,6 +4547,26 @@
"smithy.api#documentation": "Cmaf Ingest Segment Length Units"
}
},
+ "com.amazonaws.medialive#CmafKLVBehavior": {
+ "type": "enum",
+ "members": {
+ "NO_PASSTHROUGH": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "NO_PASSTHROUGH"
+ }
+ },
+ "PASSTHROUGH": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "PASSTHROUGH"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Cmaf KLVBehavior"
+ }
+ },
"com.amazonaws.medialive#CmafNielsenId3Behavior": {
"type": "enum",
"members": {
@@ -14195,7 +14243,7 @@
"TimedMetadataBehavior": {
"target": "com.amazonaws.medialive#Fmp4TimedMetadataBehavior",
"traits": {
- "smithy.api#documentation": "When set to passthrough, timed metadata is passed through from input to output.",
+ "smithy.api#documentation": "Set to PASSTHROUGH to enable ID3 metadata insertion. To include metadata, you configure other parameters in the output group or individual outputs, or you add an ID3 action to the channel schedule.",
"smithy.api#jsonName": "timedMetadataBehavior"
}
}
@@ -17862,20 +17910,20 @@
"Tag": {
"target": "com.amazonaws.medialive#__string",
"traits": {
- "smithy.api#documentation": "ID3 tag to insert into each segment. Supports special keyword identifiers to substitute in segment-related values.\\nSupported keyword identifiers: https://docs.aws.amazon.com/medialive/latest/ug/variable-data-identifiers.html",
+ "smithy.api#documentation": "Complete this parameter if you want to specify only the metadata, not the entire frame. MediaLive will insert the metadata in a TXXX frame. Enter the value as plain text. You can include standard MediaLive variable data such as the current segment number.",
"smithy.api#jsonName": "tag"
}
},
"Id3": {
"target": "com.amazonaws.medialive#__string",
"traits": {
- "smithy.api#documentation": "Base64 string formatted according to the ID3 specification: http://id3.org/id3v2.4.0-structure",
+ "smithy.api#documentation": "Complete this parameter if you want to specify the entire ID3 metadata. Enter a base64 string that contains one or more fully formed ID3 tags, according to the ID3 specification: http://id3.org/id3v2.4.0-structure",
"smithy.api#jsonName": "id3"
}
}
},
"traits": {
- "smithy.api#documentation": "Settings for the action to insert a user-defined ID3 tag in each HLS segment"
+ "smithy.api#documentation": "Settings for the action to insert ID3 metadata in every segment, in HLS output groups."
}
},
"com.amazonaws.medialive#HlsId3SegmentTaggingState": {
@@ -18382,14 +18430,14 @@
"target": "com.amazonaws.medialive#__string",
"traits": {
"smithy.api#clientOptional": {},
- "smithy.api#documentation": "Base64 string formatted according to the ID3 specification: http://id3.org/id3v2.4.0-structure",
+ "smithy.api#documentation": "Enter a base64 string that contains one or more fully formed ID3 tags. See the ID3 specification: http://id3.org/id3v2.4.0-structure",
"smithy.api#jsonName": "id3",
"smithy.api#required": {}
}
}
},
"traits": {
- "smithy.api#documentation": "Settings for the action to emit HLS metadata"
+ "smithy.api#documentation": "Settings for the action to insert ID3 metadata (as a one-time action) in HLS output groups."
}
},
"com.amazonaws.medialive#HlsTsFileMode": {
@@ -23701,7 +23749,7 @@
"TimedMetadataBehavior": {
"target": "com.amazonaws.medialive#M3u8TimedMetadataBehavior",
"traits": {
- "smithy.api#documentation": "When set to passthrough, timed metadata is passed through from input to output.",
+ "smithy.api#documentation": "Set to PASSTHROUGH to enable ID3 metadata insertion. To include metadata, you configure other parameters in the output group or individual outputs, or you add an ID3 action to the channel schedule.",
"smithy.api#jsonName": "timedMetadataBehavior"
}
},
@@ -25157,6 +25205,20 @@
"smithy.api#documentation": "ID of the channel in MediaPackage that is the destination for this output group. You do not need to specify the individual inputs in MediaPackage; MediaLive will handle the connection of the two MediaLive pipelines to the two MediaPackage inputs. The MediaPackage channel and MediaLive channel must be in the same region.",
"smithy.api#jsonName": "channelId"
}
+ },
+ "ChannelGroup": {
+ "target": "com.amazonaws.medialive#__stringMin1",
+ "traits": {
+ "smithy.api#documentation": "Name of the channel group in MediaPackageV2. Only use if you are sending CMAF Ingest output to a CMAF ingest endpoint on a MediaPackage channel that uses MediaPackage v2.",
+ "smithy.api#jsonName": "channelGroup"
+ }
+ },
+ "ChannelName": {
+ "target": "com.amazonaws.medialive#__stringMin1",
+ "traits": {
+ "smithy.api#documentation": "Name of the channel in MediaPackageV2. Only use if you are sending CMAF Ingest output to a CMAF ingest endpoint on a MediaPackage channel that uses MediaPackage v2.",
+ "smithy.api#jsonName": "channelName"
+ }
}
},
"traits": {
@@ -29362,14 +29424,14 @@
"HlsId3SegmentTaggingSettings": {
"target": "com.amazonaws.medialive#HlsId3SegmentTaggingScheduleActionSettings",
"traits": {
- "smithy.api#documentation": "Action to insert HLS ID3 segment tagging",
+ "smithy.api#documentation": "Action to insert ID3 metadata in every segment, in HLS output groups",
"smithy.api#jsonName": "hlsId3SegmentTaggingSettings"
}
},
"HlsTimedMetadataSettings": {
"target": "com.amazonaws.medialive#HlsTimedMetadataScheduleActionSettings",
"traits": {
- "smithy.api#documentation": "Action to insert HLS metadata",
+ "smithy.api#documentation": "Action to insert ID3 metadata once, in HLS output groups",
"smithy.api#jsonName": "hlsTimedMetadataSettings"
}
},
@@ -37870,6 +37932,16 @@
"smithy.api#documentation": "Placeholder documentation for __string"
}
},
+ "com.amazonaws.medialive#__stringMax100": {
+ "type": "string",
+ "traits": {
+ "smithy.api#documentation": "Placeholder documentation for __stringMax100",
+ "smithy.api#length": {
+ "min": 0,
+ "max": 100
+ }
+ }
+ },
"com.amazonaws.medialive#__stringMax1000": {
"type": "string",
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/mwaa.json b/codegen/sdk-codegen/aws-models/mwaa.json
index 161364d6708..99d50643a87 100644
--- a/codegen/sdk-codegen/aws-models/mwaa.json
+++ b/codegen/sdk-codegen/aws-models/mwaa.json
@@ -1136,7 +1136,7 @@
"AirflowVersion": {
"target": "com.amazonaws.mwaa#AirflowVersion",
"traits": {
- "smithy.api#documentation": "The Apache Airflow version for your environment. If no value is specified, it defaults to the latest version.\n For more information, see Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (Amazon MWAA).
\n Valid values: 1.10.12
, 2.0.2
, 2.2.2
,\n 2.4.3
, 2.5.1
, 2.6.3
, 2.7.2
,\n 2.8.1
, 2.9.2
, and 2.10.1
.
"
+ "smithy.api#documentation": "The Apache Airflow version for your environment. If no value is specified, it defaults to the latest version.\n For more information, see Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (Amazon MWAA).
\n Valid values: 1.10.12
, 2.0.2
, 2.2.2
,\n 2.4.3
, 2.5.1
, 2.6.3
, 2.7.2
,\n 2.8.1
, 2.9.2
, 2.10.1
, and 2.10.3
.
"
}
},
"LoggingConfiguration": {
@@ -1443,7 +1443,7 @@
"AirflowVersion": {
"target": "com.amazonaws.mwaa#AirflowVersion",
"traits": {
- "smithy.api#documentation": "The Apache Airflow version on your environment.
\n Valid values: 1.10.12
, 2.0.2
, 2.2.2
,\n 2.4.3
, 2.5.1
, 2.6.3
, 2.7.2
,\n 2.8.1
, 2.9.2
, and 2.10.1
.
"
+ "smithy.api#documentation": "The Apache Airflow version on your environment.
\n Valid values: 1.10.12
, 2.0.2
, 2.2.2
,\n 2.4.3
, 2.5.1
, 2.6.3
, 2.7.2
,\n 2.8.1
, 2.9.2
, 2.10.1
, and 2.10.3
.
"
}
},
"SourceBucketArn": {
@@ -2989,7 +2989,7 @@
"AirflowVersion": {
"target": "com.amazonaws.mwaa#AirflowVersion",
"traits": {
- "smithy.api#documentation": "The Apache Airflow version for your environment. To upgrade your environment, specify a newer version of Apache Airflow supported by Amazon MWAA.
\n Before you upgrade an environment, make sure your requirements, DAGs, plugins, and other resources used in your workflows are compatible with the new Apache Airflow version. For more information about updating\n your resources, see Upgrading an Amazon MWAA environment.
\n Valid values: 1.10.12
, 2.0.2
, 2.2.2
,\n 2.4.3
, 2.5.1
, 2.6.3
, 2.7.2
,\n 2.8.1
, 2.9.2
, and 2.10.1
.
"
+ "smithy.api#documentation": "The Apache Airflow version for your environment. To upgrade your environment, specify a newer version of Apache Airflow supported by Amazon MWAA.
\n Before you upgrade an environment, make sure your requirements, DAGs, plugins, and other resources used in your workflows are compatible with the new Apache Airflow version. For more information about updating\n your resources, see Upgrading an Amazon MWAA environment.
\n Valid values: 1.10.12
, 2.0.2
, 2.2.2
,\n 2.4.3
, 2.5.1
, 2.6.3
, 2.7.2
,\n 2.8.1
, 2.9.2
, 2.10.1
, and 2.10.3
.
"
}
},
"SourceBucketArn": {
diff --git a/codegen/sdk-codegen/aws-models/networkmanager.json b/codegen/sdk-codegen/aws-models/networkmanager.json
index 8000d04bff4..2019fcd2fbe 100644
--- a/codegen/sdk-codegen/aws-models/networkmanager.json
+++ b/codegen/sdk-codegen/aws-models/networkmanager.json
@@ -12880,7 +12880,7 @@
"EdgeLocations": {
"target": "com.amazonaws.networkmanager#ExternalRegionCodeList",
"traits": {
- "smithy.api#documentation": "One or more edge locations to update for the Direct Connect gateway attachment. The updated array of edge locations overwrites the previous array of locations. EdgeLocations
is only used for Direct Connect gateway attachments. Do
"
+ "smithy.api#documentation": "One or more edge locations to update for the Direct Connect gateway attachment. The updated array of edge locations overwrites the previous array of locations. EdgeLocations
is only used for Direct Connect gateway attachments.
"
}
}
},
diff --git a/codegen/sdk-codegen/aws-models/quicksight.json b/codegen/sdk-codegen/aws-models/quicksight.json
index c949cbe8c3c..8f68cab4baa 100644
--- a/codegen/sdk-codegen/aws-models/quicksight.json
+++ b/codegen/sdk-codegen/aws-models/quicksight.json
@@ -9141,6 +9141,12 @@
"traits": {
"smithy.api#documentation": "When you create the dataset, Amazon QuickSight adds the dataset to these folders.
"
}
+ },
+ "PerformanceConfiguration": {
+ "target": "com.amazonaws.quicksight#PerformanceConfiguration",
+ "traits": {
+ "smithy.api#documentation": "The configuration for the performance optimization of the dataset that contains a UniqueKey
configuration.
"
+ }
}
},
"traits": {
@@ -13418,6 +13424,12 @@
"traits": {
"smithy.api#documentation": "The parameters that are declared in a dataset.
"
}
+ },
+ "PerformanceConfiguration": {
+ "target": "com.amazonaws.quicksight#PerformanceConfiguration",
+ "traits": {
+ "smithy.api#documentation": "The performance optimization configuration of a dataset.
"
+ }
}
},
"traits": {
@@ -40442,6 +40454,20 @@
}
}
},
+ "com.amazonaws.quicksight#PerformanceConfiguration": {
+ "type": "structure",
+ "members": {
+ "UniqueKeys": {
+ "target": "com.amazonaws.quicksight#UniqueKeyList",
+ "traits": {
+ "smithy.api#documentation": "A UniqueKey
configuration.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The configuration for the performance optimization of the dataset that contains a UniqueKey
configuration.
"
+ }
+ },
"com.amazonaws.quicksight#PeriodOverPeriodComputation": {
"type": "structure",
"members": {
@@ -56274,6 +56300,45 @@
"smithy.api#pattern": "^[^\\u0000-\\u00FF]$"
}
},
+ "com.amazonaws.quicksight#UniqueKey": {
+ "type": "structure",
+ "members": {
+ "ColumnNames": {
+ "target": "com.amazonaws.quicksight#UniqueKeyColumnNameList",
+ "traits": {
+ "smithy.api#documentation": "The name of the column that is referenced in the UniqueKey
configuration.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "A UniqueKey
configuration that references a dataset column.
"
+ }
+ },
+ "com.amazonaws.quicksight#UniqueKeyColumnNameList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.quicksight#ColumnName"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 1
+ }
+ }
+ },
+ "com.amazonaws.quicksight#UniqueKeyList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.quicksight#UniqueKey"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 1
+ }
+ }
+ },
"com.amazonaws.quicksight#UniqueValuesComputation": {
"type": "structure",
"members": {
@@ -58187,6 +58252,12 @@
"traits": {
"smithy.api#documentation": "The parameter declarations of the dataset.
"
}
+ },
+ "PerformanceConfiguration": {
+ "target": "com.amazonaws.quicksight#PerformanceConfiguration",
+ "traits": {
+ "smithy.api#documentation": "The configuration for the performance optimization of the dataset that contains a UniqueKey
configuration.
"
+ }
}
},
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/rds.json b/codegen/sdk-codegen/aws-models/rds.json
index 1fde890c40b..7906ba4e51e 100644
--- a/codegen/sdk-codegen/aws-models/rds.json
+++ b/codegen/sdk-codegen/aws-models/rds.json
@@ -2874,6 +2874,12 @@
"smithy.api#enumValue": "MYSQL_NATIVE_PASSWORD"
}
},
+ "MYSQL_CACHING_SHA2_PASSWORD": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "MYSQL_CACHING_SHA2_PASSWORD"
+ }
+ },
"POSTGRES_SCRAM_SHA_256": {
"target": "smithy.api#Unit",
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/resiliencehub.json b/codegen/sdk-codegen/aws-models/resiliencehub.json
index e2fc7be8969..18f132ed97a 100644
--- a/codegen/sdk-codegen/aws-models/resiliencehub.json
+++ b/codegen/sdk-codegen/aws-models/resiliencehub.json
@@ -225,6 +225,26 @@
}
}
},
+ "com.amazonaws.resiliencehub#Alarm": {
+ "type": "structure",
+ "members": {
+ "alarmArn": {
+ "target": "com.amazonaws.resiliencehub#Arn",
+ "traits": {
+ "smithy.api#documentation": "Amazon Resource Name (ARN) of the Amazon CloudWatch alarm.
"
+ }
+ },
+ "source": {
+ "target": "com.amazonaws.resiliencehub#String255",
+ "traits": {
+ "smithy.api#documentation": "Indicates the source of the Amazon CloudWatch alarm. That is, it indicates if the\n alarm was created using Resilience Hub recommendation (AwsResilienceHub
),\n or if you had created the alarm in Amazon CloudWatch (Customer
).
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Indicates the Amazon CloudWatch alarm detected while running an assessment.
"
+ }
+ },
"com.amazonaws.resiliencehub#AlarmRecommendation": {
"type": "structure",
"members": {
@@ -683,7 +703,7 @@
"complianceStatus": {
"target": "com.amazonaws.resiliencehub#ComplianceStatus",
"traits": {
- "smithy.api#documentation": "Current\n status of compliance for the resiliency policy.
"
+ "smithy.api#documentation": "Current status of compliance for the resiliency policy.
"
}
},
"cost": {
@@ -1146,7 +1166,7 @@
"appComponents": {
"target": "com.amazonaws.resiliencehub#AppComponentNameList",
"traits": {
- "smithy.api#documentation": "Indicates the Application Components (AppComponents) that were assessed as part of the\n assessnent and are associated with the identified risk and recommendation.
\n \n This property is available only in the US East (N. Virginia) Region.
\n "
+ "smithy.api#documentation": "Indicates the Application Components (AppComponents) that were assessed as part of the\n assessment and are associated with the identified risk and recommendation.
\n \n This property is available only in the US East (N. Virginia) Region.
\n "
}
}
},
@@ -2465,6 +2485,12 @@
"smithy.api#required": {}
}
},
+ "appComponentId": {
+ "target": "com.amazonaws.resiliencehub#EntityName255",
+ "traits": {
+ "smithy.api#documentation": "Indicates the identifier of an AppComponent.
"
+ }
+ },
"excludeReason": {
"target": "com.amazonaws.resiliencehub#ExcludeRecommendationReason",
"traits": {
@@ -2549,7 +2575,7 @@
"diffType": {
"target": "com.amazonaws.resiliencehub#DifferenceType",
"traits": {
- "smithy.api#documentation": "Difference type between actual and expected recovery point objective (RPO) and recovery\n time objective (RTO) values. Currently, Resilience Hub supports only\n NotEqual
difference type.
"
+ "smithy.api#documentation": "Difference type between actual and expected recovery point objective (RPO) and recovery\n time objective (RTO) values. Currently, Resilience Hub supports only\n NotEqual
difference type.
"
}
}
},
@@ -5422,6 +5448,26 @@
}
}
},
+ "com.amazonaws.resiliencehub#Experiment": {
+ "type": "structure",
+ "members": {
+ "experimentArn": {
+ "target": "com.amazonaws.resiliencehub#String255",
+ "traits": {
+ "smithy.api#documentation": "Amazon Resource Name (ARN) of the FIS experiment.
"
+ }
+ },
+ "experimentTemplateId": {
+ "target": "com.amazonaws.resiliencehub#String255",
+ "traits": {
+ "smithy.api#documentation": "Identifier of the FIS experiment template.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "Indicates the FIS experiment detected while running an assessment.
"
+ }
+ },
"com.amazonaws.resiliencehub#FailedGroupingRecommendationEntries": {
"type": "list",
"member": {
@@ -7987,7 +8033,7 @@
"invokerRoleName": {
"target": "com.amazonaws.resiliencehub#IamRoleName",
"traits": {
- "smithy.api#documentation": "Existing Amazon Web Services\n IAM role name in the primary Amazon Web Services account that will be assumed by\n Resilience Hub Service Principle to obtain a read-only access to your application\n resources while running an assessment.
\n \n \n - \n
You must have iam:passRole
permission for this role while creating or\n updating the application.
\n \n - \n
Currently, invokerRoleName
accepts only [A-Za-z0-9_+=,.@-]
\n characters.
\n \n
\n "
+ "smithy.api#documentation": "Existing Amazon Web Services\n IAM role name in the primary Amazon Web Services account that will be assumed by\n Resilience Hub Service Principle to obtain a read-only access to your application\n resources while running an assessment.
\n If your IAM role includes a path, you must include the path in the invokerRoleName
parameter. \n For example, if your IAM role's ARN is arn:aws:iam:123456789012:role/my-path/role-name
, you should pass my-path/role-name
.\n
\n \n \n - \n
You must have iam:passRole
permission for this role while creating or\n updating the application.
\n \n - \n
Currently, invokerRoleName
accepts only [A-Za-z0-9_+=,.@-]
\n characters.
\n \n
\n "
}
},
"crossAccountRoleArns": {
@@ -8427,6 +8473,18 @@
"traits": {
"smithy.api#documentation": "Indicates the reason for excluding an operational recommendation.
"
}
+ },
+ "latestDiscoveredExperiment": {
+ "target": "com.amazonaws.resiliencehub#Experiment",
+ "traits": {
+ "smithy.api#documentation": "Indicates the experiment created in FIS that was discovered by Resilience Hub, which matches the recommendation.
"
+ }
+ },
+ "discoveredAlarm": {
+ "target": "com.amazonaws.resiliencehub#Alarm",
+ "traits": {
+ "smithy.api#documentation": "Indicates the previously implemented Amazon CloudWatch alarm discovered by Resilience Hub.
"
+ }
}
},
"traits": {
@@ -9205,7 +9263,7 @@
"hasMoreErrors": {
"target": "com.amazonaws.resiliencehub#BooleanOptional",
"traits": {
- "smithy.api#documentation": " This indicates if there are more errors not listed in the\n resourceErrors
\n list.
"
+ "smithy.api#documentation": " This indicates if there are more errors not listed in the resourceErrors
\n list.
"
}
}
},
@@ -10204,6 +10262,12 @@
"smithy.api#required": {}
}
},
+ "appComponentId": {
+ "target": "com.amazonaws.resiliencehub#EntityName255",
+ "traits": {
+ "smithy.api#documentation": "Indicates the identifier of the AppComponent.
"
+ }
+ },
"appComponentName": {
"target": "com.amazonaws.resiliencehub#EntityId",
"traits": {
@@ -10924,6 +10988,12 @@
"smithy.api#required": {}
}
},
+ "appComponentId": {
+ "target": "com.amazonaws.resiliencehub#EntityName255",
+ "traits": {
+ "smithy.api#documentation": "Indicates the identifier of the AppComponent.
"
+ }
+ },
"excludeReason": {
"target": "com.amazonaws.resiliencehub#ExcludeRecommendationReason",
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/route-53-domains.json b/codegen/sdk-codegen/aws-models/route-53-domains.json
index e78189dd068..01bf154947e 100644
--- a/codegen/sdk-codegen/aws-models/route-53-domains.json
+++ b/codegen/sdk-codegen/aws-models/route-53-domains.json
@@ -3638,10 +3638,7 @@
"com.amazonaws.route53domains#LangCode": {
"type": "string",
"traits": {
- "smithy.api#length": {
- "min": 0,
- "max": 3
- }
+ "smithy.api#pattern": "^|[A-Za-z]{2,3}$"
}
},
"com.amazonaws.route53domains#ListDomains": {
@@ -4280,6 +4277,12 @@
"traits": {
"smithy.api#enumValue": "TRANSFER_ON_RENEW"
}
+ },
+ "RESTORE_DOMAIN": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "RESTORE_DOMAIN"
+ }
}
}
},
@@ -4291,7 +4294,7 @@
"traits": {
"smithy.api#length": {
"min": 0,
- "max": 20
+ "max": 21
}
}
},
@@ -4344,7 +4347,10 @@
"com.amazonaws.route53domains#Price": {
"type": "double",
"traits": {
- "smithy.api#default": 0
+ "smithy.api#default": 0,
+ "smithy.api#range": {
+ "min": 0.0
+ }
}
},
"com.amazonaws.route53domains#PriceWithCurrency": {
diff --git a/codegen/sdk-codegen/aws-models/servicediscovery.json b/codegen/sdk-codegen/aws-models/servicediscovery.json
index 483b9e06716..6e5f392f14e 100644
--- a/codegen/sdk-codegen/aws-models/servicediscovery.json
+++ b/codegen/sdk-codegen/aws-models/servicediscovery.json
@@ -636,7 +636,7 @@
}
],
"traits": {
- "smithy.api#documentation": "Deletes a specified service. If the service still contains one or more registered instances, the request\n fails.
",
+ "smithy.api#documentation": "Deletes a specified service and all associated service attributes. If the service still contains one or more registered instances, the request\n fails.
",
"smithy.api#examples": [
{
"title": "Example: Delete service",
@@ -649,6 +649,68 @@
]
}
},
+ "com.amazonaws.servicediscovery#DeleteServiceAttributes": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.servicediscovery#DeleteServiceAttributesRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.servicediscovery#DeleteServiceAttributesResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.servicediscovery#InvalidInput"
+ },
+ {
+ "target": "com.amazonaws.servicediscovery#ServiceNotFound"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Deletes specific attributes associated with a service.
",
+ "smithy.api#examples": [
+ {
+ "title": "DeleteServiceAttributes example",
+ "documentation": "Example: Delete service attribute by providing attribute key and service ID",
+ "input": {
+ "Attributes": [
+ "port"
+ ],
+ "ServiceId": "srv-e4anhexample0004"
+ },
+ "output": {}
+ }
+ ]
+ }
+ },
+ "com.amazonaws.servicediscovery#DeleteServiceAttributesRequest": {
+ "type": "structure",
+ "members": {
+ "ServiceId": {
+ "target": "com.amazonaws.servicediscovery#ResourceId",
+ "traits": {
+ "smithy.api#documentation": "The ID of the service from which the attributes will be deleted.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Attributes": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributeKeyList",
+ "traits": {
+ "smithy.api#documentation": "A list of keys corresponding to each attribute that you want to delete.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.servicediscovery#DeleteServiceAttributesResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.servicediscovery#DeleteServiceRequest": {
"type": "structure",
"members": {
@@ -978,13 +1040,13 @@
"DnsRecords": {
"target": "com.amazonaws.servicediscovery#DnsRecordList",
"traits": {
- "smithy.api#documentation": "An array that contains one DnsRecord
object for each Route 53 DNS record that you want Cloud Map\n to create when you register an instance.
",
+ "smithy.api#documentation": "An array that contains one DnsRecord
object for each Route 53 DNS record that you want Cloud Map\n to create when you register an instance.
\n \n The record type of a service specified in a DnsRecord
object can't be updated. To change a record type, you need to delete the service and recreate it with a new\n DnsConfig
.
\n ",
"smithy.api#required": {}
}
}
},
"traits": {
- "smithy.api#documentation": "A complex type that contains information about the Amazon Route 53 DNS records that you want Cloud Map to create\n when you register an instance.
\n \n The record types of a service can only be changed by deleting the service and recreating it with a new\n Dnsconfig
.
\n "
+ "smithy.api#documentation": "A complex type that contains information about the Amazon Route 53 DNS records that you want Cloud Map to create\n when you register an instance.
"
}
},
"com.amazonaws.servicediscovery#DnsConfigChange": {
@@ -1444,6 +1506,72 @@
"smithy.api#documentation": "Gets the settings for a specified service.
"
}
},
+ "com.amazonaws.servicediscovery#GetServiceAttributes": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.servicediscovery#GetServiceAttributesRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.servicediscovery#GetServiceAttributesResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.servicediscovery#InvalidInput"
+ },
+ {
+ "target": "com.amazonaws.servicediscovery#ServiceNotFound"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Returns the attributes associated with a specified service.
",
+ "smithy.api#examples": [
+ {
+ "title": "GetServiceAttributes Example",
+ "documentation": "This example gets the attributes for a specified service.",
+ "input": {
+ "ServiceId": "srv-e4anhexample0004"
+ },
+ "output": {
+ "ServiceAttributes": {
+ "Attributes": {
+ "port": "80"
+ },
+ "ServiceArn": "arn:aws:servicediscovery:us-west-2:123456789012:service/srv-e4anhexample0004"
+ }
+ }
+ }
+ ]
+ }
+ },
+ "com.amazonaws.servicediscovery#GetServiceAttributesRequest": {
+ "type": "structure",
+ "members": {
+ "ServiceId": {
+ "target": "com.amazonaws.servicediscovery#ResourceId",
+ "traits": {
+ "smithy.api#documentation": "The ID of the service that you want to get attributes for.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.servicediscovery#GetServiceAttributesResponse": {
+ "type": "structure",
+ "members": {
+ "ServiceAttributes": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributes",
+ "traits": {
+ "smithy.api#documentation": "A complex type that contains the service ARN and a list of attribute key-value pairs associated with the service.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.servicediscovery#GetServiceRequest": {
"type": "structure",
"members": {
@@ -2424,7 +2552,8 @@
"smithy.api#length": {
"min": 0,
"max": 1024
- }
+ },
+ "smithy.api#pattern": "^[!-~]{1,1024}$"
}
},
"com.amazonaws.servicediscovery#NamespaceNameHttp": {
@@ -3267,6 +3396,9 @@
{
"target": "com.amazonaws.servicediscovery#DeleteService"
},
+ {
+ "target": "com.amazonaws.servicediscovery#DeleteServiceAttributes"
+ },
{
"target": "com.amazonaws.servicediscovery#DeregisterInstance"
},
@@ -3291,6 +3423,9 @@
{
"target": "com.amazonaws.servicediscovery#GetService"
},
+ {
+ "target": "com.amazonaws.servicediscovery#GetServiceAttributes"
+ },
{
"target": "com.amazonaws.servicediscovery#ListInstances"
},
@@ -3329,6 +3464,9 @@
},
{
"target": "com.amazonaws.servicediscovery#UpdateService"
+ },
+ {
+ "target": "com.amazonaws.servicediscovery#UpdateServiceAttributes"
}
],
"traits": {
@@ -4540,6 +4678,84 @@
"smithy.api#httpError": 400
}
},
+ "com.amazonaws.servicediscovery#ServiceAttributeKey": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 255
+ }
+ }
+ },
+ "com.amazonaws.servicediscovery#ServiceAttributeKeyList": {
+ "type": "list",
+ "member": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributeKey"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 30
+ }
+ }
+ },
+ "com.amazonaws.servicediscovery#ServiceAttributeValue": {
+ "type": "string",
+ "traits": {
+ "smithy.api#length": {
+ "min": 0,
+ "max": 1024
+ }
+ }
+ },
+ "com.amazonaws.servicediscovery#ServiceAttributes": {
+ "type": "structure",
+ "members": {
+ "ServiceArn": {
+ "target": "com.amazonaws.servicediscovery#Arn",
+ "traits": {
+ "smithy.api#documentation": "The ARN of the service that the attributes are associated with.
"
+ }
+ },
+ "Attributes": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributesMap",
+ "traits": {
+ "smithy.api#documentation": "A string map that contains the following information for the service that you specify in\n ServiceArn
:
\n \n You can specify a total of 30 attributes.
"
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "A complex type that contains information about attributes associated with a specific service.
"
+ }
+ },
+ "com.amazonaws.servicediscovery#ServiceAttributesLimitExceededException": {
+ "type": "structure",
+ "members": {
+ "Message": {
+ "target": "com.amazonaws.servicediscovery#ErrorMessage"
+ }
+ },
+ "traits": {
+ "smithy.api#documentation": "The attribute can't be added to the service because you've exceeded the quota for the number of attributes you can add to a service.
",
+ "smithy.api#error": "client",
+ "smithy.api#httpError": 400
+ }
+ },
+ "com.amazonaws.servicediscovery#ServiceAttributesMap": {
+ "type": "map",
+ "key": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributeKey"
+ },
+ "value": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributeValue"
+ },
+ "traits": {
+ "smithy.api#length": {
+ "min": 1,
+ "max": 30
+ }
+ }
+ },
"com.amazonaws.servicediscovery#ServiceChange": {
"type": "structure",
"members": {
@@ -5330,6 +5546,71 @@
]
}
},
+ "com.amazonaws.servicediscovery#UpdateServiceAttributes": {
+ "type": "operation",
+ "input": {
+ "target": "com.amazonaws.servicediscovery#UpdateServiceAttributesRequest"
+ },
+ "output": {
+ "target": "com.amazonaws.servicediscovery#UpdateServiceAttributesResponse"
+ },
+ "errors": [
+ {
+ "target": "com.amazonaws.servicediscovery#InvalidInput"
+ },
+ {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributesLimitExceededException"
+ },
+ {
+ "target": "com.amazonaws.servicediscovery#ServiceNotFound"
+ }
+ ],
+ "traits": {
+ "smithy.api#documentation": "Submits a request to update a specified service to add service-level attributes.
",
+ "smithy.api#examples": [
+ {
+ "title": "UpdateServiceAttributes Example",
+ "documentation": "This example submits a request to update the specified service to add a port attribute with the value 80.",
+ "input": {
+ "ServiceId": "srv-e4anhexample0004",
+ "Attributes": {
+ "port": "80"
+ }
+ },
+ "output": {}
+ }
+ ]
+ }
+ },
+ "com.amazonaws.servicediscovery#UpdateServiceAttributesRequest": {
+ "type": "structure",
+ "members": {
+ "ServiceId": {
+ "target": "com.amazonaws.servicediscovery#ResourceId",
+ "traits": {
+ "smithy.api#documentation": "The ID of the service that you want to update.
",
+ "smithy.api#required": {}
+ }
+ },
+ "Attributes": {
+ "target": "com.amazonaws.servicediscovery#ServiceAttributesMap",
+ "traits": {
+ "smithy.api#documentation": "A string map that contains attribute key-value pairs.
",
+ "smithy.api#required": {}
+ }
+ }
+ },
+ "traits": {
+ "smithy.api#input": {}
+ }
+ },
+ "com.amazonaws.servicediscovery#UpdateServiceAttributesResponse": {
+ "type": "structure",
+ "members": {},
+ "traits": {
+ "smithy.api#output": {}
+ }
+ },
"com.amazonaws.servicediscovery#UpdateServiceRequest": {
"type": "structure",
"members": {
@@ -5343,7 +5624,7 @@
"Service": {
"target": "com.amazonaws.servicediscovery#ServiceChange",
"traits": {
- "smithy.api#documentation": "A complex type that contains the new settings for the service.
",
+ "smithy.api#documentation": "A complex type that contains the new settings for the service. You can specify a maximum of 30 attributes (key-value pairs).
",
"smithy.api#required": {}
}
}
diff --git a/codegen/sdk-codegen/aws-models/synthetics.json b/codegen/sdk-codegen/aws-models/synthetics.json
index 571fa4c4a0e..940783ca850 100644
--- a/codegen/sdk-codegen/aws-models/synthetics.json
+++ b/codegen/sdk-codegen/aws-models/synthetics.json
@@ -314,7 +314,7 @@
"min": 1,
"max": 2048
},
- "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:canary:[0-9a-z_\\-]{1,255}$"
+ "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:canary:[0-9a-z_\\-]{1,255}$"
}
},
"com.amazonaws.synthetics#CanaryCodeInput": {
@@ -1526,7 +1526,7 @@
"min": 1,
"max": 2048
},
- "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9-_]+(:(\\$LATEST|[a-zA-Z0-9-_]+))?$"
+ "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9-_]+(:(\\$LATEST|[a-zA-Z0-9-_]+))?$"
}
},
"com.amazonaws.synthetics#GetCanary": {
@@ -1777,7 +1777,7 @@
"min": 1,
"max": 128
},
- "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:group:[0-9a-z]+$"
+ "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:group:[0-9a-z]+$"
}
},
"com.amazonaws.synthetics#GroupIdentifier": {
@@ -1863,7 +1863,7 @@
"min": 1,
"max": 2048
},
- "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:kms:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:key/[\\w\\-\\/]+$"
+ "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:kms:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:key/[\\w\\-\\/]+$"
}
},
"com.amazonaws.synthetics#ListAssociatedGroups": {
@@ -2291,7 +2291,7 @@
"min": 1,
"max": 2048
},
- "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:\\d{12}:(canary|group):[0-9a-z_\\-]+$"
+ "smithy.api#pattern": "^arn:(aws[a-zA-Z-]*)?:synthetics:[a-z]{2,4}(-[a-z]{2,4})?-[a-z]+-\\d{1}:\\d{12}:(canary|group):[0-9a-z_\\-]+$"
}
},
"com.amazonaws.synthetics#ResourceList": {
@@ -3987,7 +3987,7 @@
"BaseCanaryRunId": {
"target": "com.amazonaws.synthetics#String",
"traits": {
- "smithy.api#documentation": "Specifies which canary run to use the screenshots from as the baseline for future visual monitoring with this canary. Valid values are \n nextrun
to use the screenshots from the next run after this update is made, lastrun
to use the screenshots from the most recent run \n before this update was made, or the value of Id
in the \n CanaryRun from any past run of this canary.
",
+ "smithy.api#documentation": "Specifies which canary run to use the screenshots from as the baseline for future visual monitoring with this canary. Valid values are \n nextrun
to use the screenshots from the next run after this update is made, lastrun
to use the screenshots from the most recent run \n before this update was made, or the value of Id
in the \n CanaryRun from a run of this a canary in the past 31 days. If you specify the Id
of a canary run older than 31 days, \n the operation returns a 400 validation exception error..
",
"smithy.api#required": {}
}
}
@@ -4030,6 +4030,12 @@
"traits": {
"smithy.api#documentation": "The IDs of the security groups for this canary.
"
}
+ },
+ "Ipv6AllowedForDualStack": {
+ "target": "com.amazonaws.synthetics#NullableBoolean",
+ "traits": {
+ "smithy.api#documentation": "Set this to true
to allow outbound IPv6 traffic on VPC canaries that are connected to dual-stack subnets. The default is false
\n
"
+ }
}
},
"traits": {
@@ -4056,6 +4062,12 @@
"traits": {
"smithy.api#documentation": "The IDs of the security groups for this canary.
"
}
+ },
+ "Ipv6AllowedForDualStack": {
+ "target": "com.amazonaws.synthetics#NullableBoolean",
+ "traits": {
+ "smithy.api#documentation": "Indicates whether this canary allows outbound IPv6 traffic if it is connected to dual-stack subnets.
"
+ }
}
},
"traits": {
diff --git a/codegen/sdk-codegen/aws-models/transfer.json b/codegen/sdk-codegen/aws-models/transfer.json
index fec686b645b..9ed633734e3 100644
--- a/codegen/sdk-codegen/aws-models/transfer.json
+++ b/codegen/sdk-codegen/aws-models/transfer.json
@@ -180,6 +180,12 @@
"traits": {
"smithy.api#documentation": "Provides Basic authentication support to the AS2 Connectors API. To use Basic authentication,\n you must provide the name or Amazon Resource Name (ARN) of a secret in Secrets Manager.
\n The default value for this parameter is null
, which indicates that Basic authentication is not enabled for the connector.
\n If the connector should use Basic authentication, the secret needs to be in the following format:
\n \n {\n \"Username\": \"user-name\",\n \"Password\": \"user-password\"\n }
\n
\n Replace user-name
and user-password
with the credentials for the actual user that is being authenticated.
\n Note the following:
\n \n - \n
You are storing these credentials in Secrets Manager, not passing them directly into this API.
\n \n - \n
If you are using the API, SDKs, or CloudFormation to configure your connector, then you must create the secret before you can enable Basic authentication.\n However, if you are using the Amazon Web Services management console, you can have the system create the secret for you.
\n \n
\n If you have previously enabled Basic authentication for a connector, you can disable it by using the UpdateConnector
API call. For example, if you are using the CLI, you can run the following command to remove Basic authentication:
\n \n update-connector --connector-id my-connector-id --as2-config 'BasicAuthSecretId=\"\"'
\n
"
}
+ },
+ "PreserveContentType": {
+ "target": "com.amazonaws.transfer#PreserveContentType",
+ "traits": {
+ "smithy.api#documentation": "Allows you to use the Amazon S3 Content-Type that is associated with objects in S3 instead of\n having the content type mapped based on the file extension. This parameter is enabled by default when you create an AS2 connector\n from the console, but disabled by default when you create an AS2 connector by calling the API directly."
+ }
}
},
"traits": {
@@ -766,6 +772,18 @@
"traits": {
"smithy.api#documentation": "Key-value pairs that can be used to group and search for agreements."
}
+ },
+ "PreserveFilename": {
+ "target": "com.amazonaws.transfer#PreserveFilenameType",
+ "traits": {
+ "smithy.api#documentation": "\n Determines whether or not Transfer Family appends a unique string of characters to the end of the AS2 message payload\n filename when saving it.\n \n \n - \n ENABLED: the filename provided by your trading partner is preserved when the file is saved.\n \n - \n DISABLED (default value): when Transfer Family saves the file, the filename is adjusted, as\n described in File names and locations.\n \n "
+ }
+ },
+ "EnforceMessageSigning": {
+ "target": "com.amazonaws.transfer#EnforceMessageSigningType",
+ "traits": {
+ "smithy.api#documentation": "\n Determines whether or not unsigned messages from your trading partners will be accepted.\n \n "
+ }
}
},
"traits": {
@@ -3218,6 +3236,18 @@
"traits": {
"smithy.api#documentation": "Key-value pairs that can be used to group and search for agreements."
}
+ },
+ "PreserveFilename": {
+ "target": "com.amazonaws.transfer#PreserveFilenameType",
+ "traits": {
+ "smithy.api#documentation": "\n Determines whether or not Transfer Family appends a unique string of characters to the end of the AS2 message payload\n filename when saving it.\n \n \n - \n ENABLED: the filename provided by your trading partner is preserved when the file is saved.\n \n - \n DISABLED (default value): when Transfer Family saves the file, the filename is adjusted, as\n described in File names and locations.\n \n "
+ }
+ },
+ "EnforceMessageSigning": {
+ "target": "com.amazonaws.transfer#EnforceMessageSigningType",
+ "traits": {
+ "smithy.api#documentation": "\n Determines whether or not unsigned messages from your trading partners will be accepted.\n \n "
+ }
}
},
"traits": {
@@ -3250,7 +3280,7 @@
"Status": {
"target": "com.amazonaws.transfer#CertificateStatusType",
"traits": {
- "smithy.api#documentation": "The certificate can be either ACTIVE, PENDING_ROTATION, or\n INACTIVE. PENDING_ROTATION means that this certificate will\n replace the current certificate when it expires."
+ "smithy.api#documentation": "Currently, the only available status is ACTIVE: all other values are reserved for future use."
}
},
"Certificate": {
@@ -4227,6 +4257,23 @@
}
}
},
+ "com.amazonaws.transfer#EnforceMessageSigningType": {
+ "type": "enum",
+ "members": {
+ "ENABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ENABLED"
+ }
+ },
+ "DISABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISABLED"
+ }
+ }
+ }
+ },
"com.amazonaws.transfer#ExecutionError": {
"type": "structure",
"members": {
@@ -7017,6 +7064,40 @@
"smithy.api#pattern": "^[\\x09-\\x0D\\x20-\\x7E]*$"
}
},
+ "com.amazonaws.transfer#PreserveContentType": {
+ "type": "enum",
+ "members": {
+ "ENABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ENABLED"
+ }
+ },
+ "DISABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISABLED"
+ }
+ }
+ }
+ },
+ "com.amazonaws.transfer#PreserveFilenameType": {
+ "type": "enum",
+ "members": {
+ "ENABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "ENABLED"
+ }
+ },
+ "DISABLED": {
+ "target": "smithy.api#Unit",
+ "traits": {
+ "smithy.api#enumValue": "DISABLED"
+ }
+ }
+ }
+ },
"com.amazonaws.transfer#PrivateKeyType": {
"type": "string",
"traits": {
@@ -9988,6 +10069,18 @@
"traits": {
"smithy.api#documentation": "Connectors are used to send files using either the AS2 or SFTP protocol. For the access role,\n provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.\n \n For AS2 connectors\n \n With AS2, you can send files by calling StartFileTransfer and specifying the\n file paths in the request parameter, SendFilePaths. We use the file’s parent\n directory (for example, for --send-file-paths /bucket/dir/file.txt, parent\n directory is /bucket/dir/) to temporarily store a processed AS2 message file,\n store the MDN when we receive them from the partner, and write a final JSON file containing\n relevant metadata of the transmission. So, the AccessRole needs to provide read\n and write access to the parent directory of the file location used in the\n StartFileTransfer request. Additionally, you need to provide read and write\n access to the parent directory of the files that you intend to send with\n StartFileTransfer.\n If you are using Basic authentication for your AS2 connector, the access role requires the\n secretsmanager:GetSecretValue permission for the secret. If the secret is encrypted using\n a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also\n needs the kms:Decrypt permission for that key.\n \n For SFTP connectors\n \n Make sure that the access role provides\n read and write access to the parent directory of the file location\n that's used in the StartFileTransfer request.\n Additionally, make sure that the role provides\n secretsmanager:GetSecretValue permission to Secrets Manager."
}
+ },
+ "PreserveFilename": {
+ "target": "com.amazonaws.transfer#PreserveFilenameType",
+ "traits": {
+ "smithy.api#documentation": "\n Determines whether or not Transfer Family appends a unique string of characters to the end of the AS2 message payload\n filename when saving it.\n \n \n - \n ENABLED: the filename provided by your trading partner is preserved when the file is saved.\n \n - \n DISABLED (default value): when Transfer Family saves the file, the filename is adjusted, as\n described in File names and locations.\n \n "
+ }
+ },
+ "EnforceMessageSigning": {
+ "target": "com.amazonaws.transfer#EnforceMessageSigningType",
+ "traits": {
+ "smithy.api#documentation": "\n Determines whether or not unsigned messages from your trading partners will be accepted.\n \n "
+ }
}
},
"traits": {
diff --git a/codegen/smithy-aws-go-codegen/src/main/resources/software/amazon/smithy/aws/go/codegen/endpoints.json b/codegen/smithy-aws-go-codegen/src/main/resources/software/amazon/smithy/aws/go/codegen/endpoints.json
index 93ffb541d3c..7ea2f774a16 100644
--- a/codegen/smithy-aws-go-codegen/src/main/resources/software/amazon/smithy/aws/go/codegen/endpoints.json
+++ b/codegen/smithy-aws-go-codegen/src/main/resources/software/amazon/smithy/aws/go/codegen/endpoints.json
@@ -2229,6 +2229,12 @@
"tags" : [ "dualstack" ]
} ]
},
+ "ap-southeast-5" : {
+ "variants" : [ {
+ "hostname" : "athena.ap-southeast-5.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
"ca-central-1" : {
"variants" : [ {
"hostname" : "athena-fips.ca-central-1.amazonaws.com",
@@ -5774,38 +5780,150 @@
},
"datasync" : {
"endpoints" : {
- "af-south-1" : { },
- "ap-east-1" : { },
- "ap-northeast-1" : { },
- "ap-northeast-2" : { },
- "ap-northeast-3" : { },
- "ap-south-1" : { },
- "ap-south-2" : { },
- "ap-southeast-1" : { },
- "ap-southeast-2" : { },
- "ap-southeast-3" : { },
- "ap-southeast-4" : { },
- "ap-southeast-5" : { },
+ "af-south-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.af-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-east-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-east-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-northeast-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-northeast-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-northeast-2" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-northeast-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-northeast-3" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-northeast-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-south-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-south-2" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-south-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-southeast-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-2" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-southeast-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-3" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-southeast-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-4" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-southeast-4.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-5" : {
+ "variants" : [ {
+ "hostname" : "datasync.ap-southeast-5.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
"ca-central-1" : {
"variants" : [ {
"hostname" : "datasync-fips.ca-central-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.ca-central-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.ca-central-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"ca-west-1" : {
"variants" : [ {
"hostname" : "datasync-fips.ca-west-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.ca-west-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.ca-west-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-central-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-central-2" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-central-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-north-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-north-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-south-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-south-2" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-south-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-west-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-west-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-west-2" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-west-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "eu-west-3" : {
+ "variants" : [ {
+ "hostname" : "datasync.eu-west-3.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "eu-central-1" : { },
- "eu-central-2" : { },
- "eu-north-1" : { },
- "eu-south-1" : { },
- "eu-south-2" : { },
- "eu-west-1" : { },
- "eu-west-2" : { },
- "eu-west-3" : { },
"fips-ca-central-1" : {
"credentialScope" : {
"region" : "ca-central-1"
@@ -5848,32 +5966,76 @@
"deprecated" : true,
"hostname" : "datasync-fips.us-west-2.amazonaws.com"
},
- "il-central-1" : { },
- "me-central-1" : { },
- "me-south-1" : { },
- "sa-east-1" : { },
+ "il-central-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.il-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "me-central-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.me-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "me-south-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.me-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "sa-east-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.sa-east-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
"us-east-1" : {
"variants" : [ {
"hostname" : "datasync-fips.us-east-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.us-east-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.us-east-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-east-2" : {
"variants" : [ {
"hostname" : "datasync-fips.us-east-2.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.us-east-2.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.us-east-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-west-1" : {
"variants" : [ {
"hostname" : "datasync-fips.us-west-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.us-west-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.us-west-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-west-2" : {
"variants" : [ {
"hostname" : "datasync-fips.us-west-2.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.us-west-2.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.us-west-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
}
}
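The dualstack entries added above follow the endpoints.json variant model: each region lists alternative hostnames keyed by a tag set (`dualstack`, `fips`, or both), and the resolver picks the variant whose tags exactly match the options the caller enabled, falling back to the default hostname otherwise. As a simplified sketch of that matching rule (not the SDK's actual resolver code; the type names here are hypothetical):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// variant mirrors one entry in an endpoints.json "variants" array.
type variant struct {
	Hostname string
	Tags     []string
}

// key canonicalizes a tag set so {"fips", "dualstack"} and
// {"dualstack", "fips"} compare equal: variant selection requires an
// exact tag-set match, not a subset match.
func key(tags []string) string {
	s := append([]string(nil), tags...)
	sort.Strings(s)
	return strings.Join(s, ",")
}

// resolve returns the variant hostname whose tag set exactly matches the
// enabled tags, or the default hostname when no variant matches.
func resolve(def string, variants []variant, enabled []string) string {
	want := key(enabled)
	for _, v := range variants {
		if key(v.Tags) == want {
			return v.Hostname
		}
	}
	return def
}

func main() {
	// The datasync us-east-1 variants from the hunk above.
	vs := []variant{
		{Hostname: "datasync-fips.us-east-1.amazonaws.com", Tags: []string{"fips"}},
		{Hostname: "datasync-fips.us-east-1.api.aws", Tags: []string{"dualstack", "fips"}},
		{Hostname: "datasync.us-east-1.api.aws", Tags: []string{"dualstack"}},
	}
	fmt.Println(resolve("datasync.us-east-1.amazonaws.com", vs, []string{"dualstack"}))
	fmt.Println(resolve("datasync.us-east-1.amazonaws.com", vs, nil))
}
```

In the real SDK, callers opt in to these variants through configuration (for example the dual-stack endpoint setting) rather than by passing tags directly.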
@@ -10698,37 +10860,81 @@
},
"endpoints" : {
"af-south-1" : {
- "hostname" : "internetmonitor.af-south-1.api.aws"
+ "hostname" : "internetmonitor.af-south-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.af-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-east-1" : {
- "hostname" : "internetmonitor.ap-east-1.api.aws"
+ "hostname" : "internetmonitor.ap-east-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-east-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-northeast-1" : {
- "hostname" : "internetmonitor.ap-northeast-1.api.aws"
+ "hostname" : "internetmonitor.ap-northeast-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-northeast-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-northeast-2" : {
- "hostname" : "internetmonitor.ap-northeast-2.api.aws"
+ "hostname" : "internetmonitor.ap-northeast-2.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-northeast-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-northeast-3" : {
- "hostname" : "internetmonitor.ap-northeast-3.api.aws"
+ "hostname" : "internetmonitor.ap-northeast-3.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-northeast-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-south-1" : {
- "hostname" : "internetmonitor.ap-south-1.api.aws"
+ "hostname" : "internetmonitor.ap-south-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-south-2" : {
- "hostname" : "internetmonitor.ap-south-2.api.aws"
+ "hostname" : "internetmonitor.ap-south-2.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-south-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-southeast-1" : {
- "hostname" : "internetmonitor.ap-southeast-1.api.aws"
+ "hostname" : "internetmonitor.ap-southeast-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-southeast-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-southeast-2" : {
- "hostname" : "internetmonitor.ap-southeast-2.api.aws"
+ "hostname" : "internetmonitor.ap-southeast-2.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-southeast-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-southeast-3" : {
- "hostname" : "internetmonitor.ap-southeast-3.api.aws"
+ "hostname" : "internetmonitor.ap-southeast-3.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-southeast-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-southeast-4" : {
- "hostname" : "internetmonitor.ap-southeast-4.api.aws"
+ "hostname" : "internetmonitor.ap-southeast-4.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.ap-southeast-4.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"ap-southeast-5" : {
"hostname" : "internetmonitor.ap-southeast-5.api.aws"
@@ -10738,52 +10944,108 @@
"variants" : [ {
"hostname" : "internetmonitor-fips.ca-central-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "internetmonitor-fips.ca-central-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "internetmonitor.ca-central-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"ca-west-1" : {
"hostname" : "internetmonitor.ca-west-1.api.aws"
},
"eu-central-1" : {
- "hostname" : "internetmonitor.eu-central-1.api.aws"
+ "hostname" : "internetmonitor.eu-central-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-central-2" : {
- "hostname" : "internetmonitor.eu-central-2.api.aws"
+ "hostname" : "internetmonitor.eu-central-2.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-central-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-north-1" : {
- "hostname" : "internetmonitor.eu-north-1.api.aws"
+ "hostname" : "internetmonitor.eu-north-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-north-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-south-1" : {
- "hostname" : "internetmonitor.eu-south-1.api.aws"
+ "hostname" : "internetmonitor.eu-south-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-south-2" : {
- "hostname" : "internetmonitor.eu-south-2.api.aws"
+ "hostname" : "internetmonitor.eu-south-2.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-south-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-west-1" : {
- "hostname" : "internetmonitor.eu-west-1.api.aws"
+ "hostname" : "internetmonitor.eu-west-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-west-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-west-2" : {
- "hostname" : "internetmonitor.eu-west-2.api.aws"
+ "hostname" : "internetmonitor.eu-west-2.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-west-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"eu-west-3" : {
- "hostname" : "internetmonitor.eu-west-3.api.aws"
+ "hostname" : "internetmonitor.eu-west-3.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.eu-west-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"il-central-1" : {
"hostname" : "internetmonitor.il-central-1.api.aws"
},
"me-central-1" : {
- "hostname" : "internetmonitor.me-central-1.api.aws"
+ "hostname" : "internetmonitor.me-central-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.me-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"me-south-1" : {
- "hostname" : "internetmonitor.me-south-1.api.aws"
+ "hostname" : "internetmonitor.me-south-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.me-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"sa-east-1" : {
- "hostname" : "internetmonitor.sa-east-1.api.aws"
+ "hostname" : "internetmonitor.sa-east-1.api.aws",
+ "variants" : [ {
+ "hostname" : "internetmonitor.sa-east-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
"us-east-1" : {
"hostname" : "internetmonitor.us-east-1.api.aws",
"variants" : [ {
"hostname" : "internetmonitor-fips.us-east-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "internetmonitor-fips.us-east-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "internetmonitor.us-east-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-east-2" : {
@@ -10791,6 +11053,12 @@
"variants" : [ {
"hostname" : "internetmonitor-fips.us-east-2.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "internetmonitor-fips.us-east-2.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "internetmonitor.us-east-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-west-1" : {
@@ -10798,6 +11066,12 @@
"variants" : [ {
"hostname" : "internetmonitor-fips.us-west-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "internetmonitor-fips.us-west-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "internetmonitor.us-west-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-west-2" : {
@@ -10805,6 +11079,12 @@
"variants" : [ {
"hostname" : "internetmonitor-fips.us-west-2.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "internetmonitor-fips.us-west-2.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "internetmonitor.us-west-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
}
}
@@ -11495,6 +11775,7 @@
"ap-southeast-2" : { },
"ap-southeast-3" : { },
"ap-southeast-4" : { },
+ "ap-southeast-5" : { },
"ca-central-1" : {
"variants" : [ {
"hostname" : "kafka-fips.ca-central-1.amazonaws.com",
@@ -12310,110 +12591,220 @@
"deprecated" : true,
"hostname" : "kms-fips.me-central-1.amazonaws.com"
},
- "me-south-1" : {
+ "me-south-1" : {
+ "variants" : [ {
+ "hostname" : "kms-fips.me-south-1.amazonaws.com",
+ "tags" : [ "fips" ]
+ } ]
+ },
+ "me-south-1-fips" : {
+ "credentialScope" : {
+ "region" : "me-south-1"
+ },
+ "deprecated" : true,
+ "hostname" : "kms-fips.me-south-1.amazonaws.com"
+ },
+ "sa-east-1" : {
+ "variants" : [ {
+ "hostname" : "kms-fips.sa-east-1.amazonaws.com",
+ "tags" : [ "fips" ]
+ } ]
+ },
+ "sa-east-1-fips" : {
+ "credentialScope" : {
+ "region" : "sa-east-1"
+ },
+ "deprecated" : true,
+ "hostname" : "kms-fips.sa-east-1.amazonaws.com"
+ },
+ "us-east-1" : {
+ "variants" : [ {
+ "hostname" : "kms-fips.us-east-1.amazonaws.com",
+ "tags" : [ "fips" ]
+ } ]
+ },
+ "us-east-1-fips" : {
+ "credentialScope" : {
+ "region" : "us-east-1"
+ },
+ "deprecated" : true,
+ "hostname" : "kms-fips.us-east-1.amazonaws.com"
+ },
+ "us-east-2" : {
+ "variants" : [ {
+ "hostname" : "kms-fips.us-east-2.amazonaws.com",
+ "tags" : [ "fips" ]
+ } ]
+ },
+ "us-east-2-fips" : {
+ "credentialScope" : {
+ "region" : "us-east-2"
+ },
+ "deprecated" : true,
+ "hostname" : "kms-fips.us-east-2.amazonaws.com"
+ },
+ "us-west-1" : {
+ "variants" : [ {
+ "hostname" : "kms-fips.us-west-1.amazonaws.com",
+ "tags" : [ "fips" ]
+ } ]
+ },
+ "us-west-1-fips" : {
+ "credentialScope" : {
+ "region" : "us-west-1"
+ },
+ "deprecated" : true,
+ "hostname" : "kms-fips.us-west-1.amazonaws.com"
+ },
+ "us-west-2" : {
+ "variants" : [ {
+ "hostname" : "kms-fips.us-west-2.amazonaws.com",
+ "tags" : [ "fips" ]
+ } ]
+ },
+ "us-west-2-fips" : {
+ "credentialScope" : {
+ "region" : "us-west-2"
+ },
+ "deprecated" : true,
+ "hostname" : "kms-fips.us-west-2.amazonaws.com"
+ }
+ }
+ },
+ "lakeformation" : {
+ "endpoints" : {
+ "af-south-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.af-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-east-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-east-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-northeast-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-northeast-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-northeast-2" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-northeast-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-northeast-3" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-northeast-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-south-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-south-2" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-south-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-southeast-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-2" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-southeast-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-3" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-southeast-3.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-4" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ap-southeast-4.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "ap-southeast-5" : {
"variants" : [ {
- "hostname" : "kms-fips.me-south-1.amazonaws.com",
- "tags" : [ "fips" ]
+ "hostname" : "lakeformation.ap-southeast-5.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "me-south-1-fips" : {
- "credentialScope" : {
- "region" : "me-south-1"
- },
- "deprecated" : true,
- "hostname" : "kms-fips.me-south-1.amazonaws.com"
+ "ca-central-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.ca-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
- "sa-east-1" : {
+ "ca-west-1" : {
"variants" : [ {
- "hostname" : "kms-fips.sa-east-1.amazonaws.com",
- "tags" : [ "fips" ]
+ "hostname" : "lakeformation.ca-west-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "sa-east-1-fips" : {
- "credentialScope" : {
- "region" : "sa-east-1"
- },
- "deprecated" : true,
- "hostname" : "kms-fips.sa-east-1.amazonaws.com"
+ "eu-central-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.eu-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
- "us-east-1" : {
+ "eu-central-2" : {
"variants" : [ {
- "hostname" : "kms-fips.us-east-1.amazonaws.com",
- "tags" : [ "fips" ]
+ "hostname" : "lakeformation.eu-central-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "us-east-1-fips" : {
- "credentialScope" : {
- "region" : "us-east-1"
- },
- "deprecated" : true,
- "hostname" : "kms-fips.us-east-1.amazonaws.com"
+ "eu-north-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.eu-north-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
- "us-east-2" : {
+ "eu-south-1" : {
"variants" : [ {
- "hostname" : "kms-fips.us-east-2.amazonaws.com",
- "tags" : [ "fips" ]
+ "hostname" : "lakeformation.eu-south-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "us-east-2-fips" : {
- "credentialScope" : {
- "region" : "us-east-2"
- },
- "deprecated" : true,
- "hostname" : "kms-fips.us-east-2.amazonaws.com"
+ "eu-south-2" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.eu-south-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
- "us-west-1" : {
+ "eu-west-1" : {
"variants" : [ {
- "hostname" : "kms-fips.us-west-1.amazonaws.com",
- "tags" : [ "fips" ]
+ "hostname" : "lakeformation.eu-west-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "us-west-1-fips" : {
- "credentialScope" : {
- "region" : "us-west-1"
- },
- "deprecated" : true,
- "hostname" : "kms-fips.us-west-1.amazonaws.com"
+ "eu-west-2" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.eu-west-2.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
},
- "us-west-2" : {
+ "eu-west-3" : {
"variants" : [ {
- "hostname" : "kms-fips.us-west-2.amazonaws.com",
- "tags" : [ "fips" ]
+ "hostname" : "lakeformation.eu-west-3.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
- "us-west-2-fips" : {
- "credentialScope" : {
- "region" : "us-west-2"
- },
- "deprecated" : true,
- "hostname" : "kms-fips.us-west-2.amazonaws.com"
- }
- }
- },
- "lakeformation" : {
- "endpoints" : {
- "af-south-1" : { },
- "ap-east-1" : { },
- "ap-northeast-1" : { },
- "ap-northeast-2" : { },
- "ap-northeast-3" : { },
- "ap-south-1" : { },
- "ap-south-2" : { },
- "ap-southeast-1" : { },
- "ap-southeast-2" : { },
- "ap-southeast-3" : { },
- "ap-southeast-4" : { },
- "ap-southeast-5" : { },
- "ca-central-1" : { },
- "ca-west-1" : { },
- "eu-central-1" : { },
- "eu-central-2" : { },
- "eu-north-1" : { },
- "eu-south-1" : { },
- "eu-south-2" : { },
- "eu-west-1" : { },
- "eu-west-2" : { },
- "eu-west-3" : { },
"fips-us-east-1" : {
"credentialScope" : {
"region" : "us-east-1"
@@ -12442,32 +12833,76 @@
"deprecated" : true,
"hostname" : "lakeformation-fips.us-west-2.amazonaws.com"
},
- "il-central-1" : { },
- "me-central-1" : { },
- "me-south-1" : { },
- "sa-east-1" : { },
+ "il-central-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.il-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "me-central-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.me-central-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "me-south-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.me-south-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "sa-east-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.sa-east-1.api.aws",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
"us-east-1" : {
"variants" : [ {
"hostname" : "lakeformation-fips.us-east-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "lakeformation-fips.us-east-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "lakeformation.us-east-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-east-2" : {
"variants" : [ {
"hostname" : "lakeformation-fips.us-east-2.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "lakeformation-fips.us-east-2.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "lakeformation.us-east-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-west-1" : {
"variants" : [ {
"hostname" : "lakeformation-fips.us-west-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "lakeformation-fips.us-west-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "lakeformation.us-west-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-west-2" : {
"variants" : [ {
"hostname" : "lakeformation-fips.us-west-2.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "lakeformation-fips.us-west-2.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "lakeformation.us-west-2.api.aws",
+ "tags" : [ "dualstack" ]
} ]
}
}
@@ -14403,6 +14838,7 @@
"ap-southeast-2" : { },
"ap-southeast-3" : { },
"ap-southeast-4" : { },
+ "ap-southeast-5" : { },
"ca-central-1" : {
"variants" : [ {
"hostname" : "network-firewall-fips.ca-central-1.amazonaws.com",
@@ -22078,6 +22514,28 @@
}
}
},
+ "trustedadvisor" : {
+ "endpoints" : {
+ "fips-us-east-1" : {
+ "credentialScope" : {
+ "region" : "us-east-1"
+ },
+ "hostname" : "trustedadvisor-fips.us-east-1.api.aws"
+ },
+ "fips-us-east-2" : {
+ "credentialScope" : {
+ "region" : "us-east-2"
+ },
+ "hostname" : "trustedadvisor-fips.us-east-2.api.aws"
+ },
+ "fips-us-west-2" : {
+ "credentialScope" : {
+ "region" : "us-west-2"
+ },
+ "hostname" : "trustedadvisor-fips.us-west-2.api.aws"
+ }
+ }
+ },
"verifiedpermissions" : {
"endpoints" : {
"af-south-1" : { },
@@ -22295,10 +22753,12 @@
"ap-southeast-3" : { },
"ap-southeast-4" : { },
"ca-central-1" : { },
+ "ca-west-1" : { },
"eu-central-1" : { },
"eu-central-2" : { },
"eu-north-1" : { },
"eu-south-1" : { },
+ "eu-south-2" : { },
"eu-west-1" : { },
"eu-west-2" : { },
"eu-west-3" : { },
@@ -24006,8 +24466,18 @@
},
"datasync" : {
"endpoints" : {
- "cn-north-1" : { },
- "cn-northwest-1" : { }
+ "cn-north-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.cn-north-1.api.amazonwebservices.com.cn",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "cn-northwest-1" : {
+ "variants" : [ {
+ "hostname" : "datasync.cn-northwest-1.api.amazonwebservices.com.cn",
+ "tags" : [ "dualstack" ]
+ } ]
+ }
}
},
"datazone" : {
@@ -24482,8 +24952,18 @@
},
"lakeformation" : {
"endpoints" : {
- "cn-north-1" : { },
- "cn-northwest-1" : { }
+ "cn-north-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.cn-north-1.api.amazonwebservices.com.cn",
+ "tags" : [ "dualstack" ]
+ } ]
+ },
+ "cn-northwest-1" : {
+ "variants" : [ {
+ "hostname" : "lakeformation.cn-northwest-1.api.amazonwebservices.com.cn",
+ "tags" : [ "dualstack" ]
+ } ]
+ }
}
},
"lambda" : {
@@ -26517,12 +26997,24 @@
"variants" : [ {
"hostname" : "datasync-fips.us-gov-east-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.us-gov-east-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.us-gov-east-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
},
"us-gov-west-1" : {
"variants" : [ {
"hostname" : "datasync-fips.us-gov-west-1.amazonaws.com",
"tags" : [ "fips" ]
+ }, {
+ "hostname" : "datasync-fips.us-gov-west-1.api.aws",
+ "tags" : [ "dualstack", "fips" ]
+ }, {
+ "hostname" : "datasync.us-gov-west-1.api.aws",
+ "tags" : [ "dualstack" ]
} ]
}
}
@@ -26579,9 +27071,6 @@
"endpoints" : {
"us-gov-east-1" : {
"variants" : [ {
- "hostname" : "dlm-fips.us-gov-east-1.api.aws",
- "tags" : [ "dualstack", "fips" ]
- }, {
"hostname" : "dlm.us-gov-east-1.amazonaws.com",
"tags" : [ "fips" ]
} ]
@@ -26595,9 +27084,6 @@
},
"us-gov-west-1" : {
"variants" : [ {
- "hostname" : "dlm-fips.us-gov-west-1.api.aws",
- "tags" : [ "dualstack", "fips" ]
- }, {
"hostname" : "dlm.us-gov-west-1.amazonaws.com",
"tags" : [ "fips" ]
} ]
@@ -30638,6 +31124,12 @@
}
}
},
+ "codebuild" : {
+ "endpoints" : {
+ "us-iso-east-1" : { },
+ "us-iso-west-1" : { }
+ }
+ },
"codedeploy" : {
"endpoints" : {
"us-iso-east-1" : { },
@@ -30701,18 +31193,8 @@
},
"dlm" : {
"endpoints" : {
- "us-iso-east-1" : {
- "variants" : [ {
- "hostname" : "dlm-fips.us-iso-east-1.api.aws.ic.gov",
- "tags" : [ "dualstack", "fips" ]
- } ]
- },
- "us-iso-west-1" : {
- "variants" : [ {
- "hostname" : "dlm-fips.us-iso-west-1.api.aws.ic.gov",
- "tags" : [ "dualstack", "fips" ]
- } ]
- }
+ "us-iso-east-1" : { },
+ "us-iso-west-1" : { }
}
},
"dms" : {
@@ -31327,6 +31809,12 @@
}
}
},
+ "scheduler" : {
+ "endpoints" : {
+ "us-iso-east-1" : { },
+ "us-iso-west-1" : { }
+ }
+ },
"secretsmanager" : {
"endpoints" : {
"us-iso-east-1" : { },
@@ -31614,12 +32102,7 @@
},
"dlm" : {
"endpoints" : {
- "us-isob-east-1" : {
- "variants" : [ {
- "hostname" : "dlm-fips.us-isob-east-1.api.aws.scloud",
- "tags" : [ "dualstack", "fips" ]
- } ]
- }
+ "us-isob-east-1" : { }
}
},
"dms" : {
@@ -31875,6 +32358,18 @@
"us-isob-east-1" : { }
}
},
+ "organizations" : {
+ "endpoints" : {
+ "aws-iso-b-global" : {
+ "credentialScope" : {
+ "region" : "us-isob-east-1"
+ },
+ "hostname" : "organizations.us-isob-east-1.sc2s.sgov.gov"
+ }
+ },
+ "isRegionalized" : false,
+ "partitionEndpoint" : "aws-iso-b-global"
+ },
"outposts" : {
"endpoints" : {
"us-isob-east-1" : { }
@@ -32032,6 +32527,11 @@
}
}
},
+ "scheduler" : {
+ "endpoints" : {
+ "us-isob-east-1" : { }
+ }
+ },
"secretsmanager" : {
"endpoints" : {
"us-isob-east-1" : { }
diff --git a/config/resolve_test.go b/config/resolve_test.go
index 69e59dc6e02..c8df627bbb8 100644
--- a/config/resolve_test.go
+++ b/config/resolve_test.go
@@ -588,11 +588,11 @@ func TestResolveDefaultsMode(t *testing.T) {
}
if diff := cmpDiff(tt.ExpectedDefaultsMode, cfg.DefaultsMode); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
if diff := cmpDiff(tt.ExpectedRuntimeEnvironment, cfg.RuntimeEnvironment); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
})
}
diff --git a/credentials/ssocreds/sso_credentials_provider_test.go b/credentials/ssocreds/sso_credentials_provider_test.go
index b894a32019e..493535b861d 100644
--- a/credentials/ssocreds/sso_credentials_provider_test.go
+++ b/credentials/ssocreds/sso_credentials_provider_test.go
@@ -189,7 +189,7 @@ func TestProvider(t *testing.T) {
}
if diff := cmpDiff(tt.ExpectedCredentials, credentials); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
})
}
diff --git a/feature/dsql/auth/CHANGELOG.md b/feature/dsql/auth/CHANGELOG.md
new file mode 100644
index 00000000000..b10b7376855
--- /dev/null
+++ b/feature/dsql/auth/CHANGELOG.md
@@ -0,0 +1,4 @@
+# v1.0.0 (2024-12-16)
+
+* **Release**: Add Aurora DSQL Auth Token Generator
+
diff --git a/feature/dsql/auth/LICENSE.txt b/feature/dsql/auth/LICENSE.txt
new file mode 100644
index 00000000000..d6456956733
--- /dev/null
+++ b/feature/dsql/auth/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/feature/dsql/auth/auth_token_generator.go b/feature/dsql/auth/auth_token_generator.go
new file mode 100644
index 00000000000..65709efca6f
--- /dev/null
+++ b/feature/dsql/auth/auth_token_generator.go
@@ -0,0 +1,121 @@
+package auth
+
+import (
+ "context"
+ "fmt"
+ "net/http"
+ "net/url"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/aws/aws-sdk-go-v2/aws"
+ v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+ "github.com/aws/aws-sdk-go-v2/internal/sdk"
+)
+
+const (
+ vendorCode = "dsql"
+ emptyPayloadHash = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
+ userAction = "DbConnect"
+ adminUserAction = "DbConnectAdmin"
+)
+
+// TokenOptions is the optional set of configuration properties for AuthToken
+type TokenOptions struct {
+ ExpiresIn time.Duration
+}
+
+// GenerateDbConnectAuthToken generates an authentication token for IAM authentication to a DSQL database
+//
+// This is the regular user variant; see [GenerateDBConnectAdminAuthToken] for the admin variant.
+//
+// * endpoint - Endpoint is the hostname to connect to the database
+// * region - Region is where the database is located
+// * creds - Credentials to use when signing the token
+func GenerateDbConnectAuthToken(ctx context.Context, endpoint, region string, creds aws.CredentialsProvider, optFns ...func(options *TokenOptions)) (string, error) {
+ values := url.Values{
+ "Action": []string{userAction},
+ }
+ return generateAuthToken(ctx, endpoint, region, values, vendorCode, creds, optFns...)
+}
+
+// GenerateDBConnectAdminAuthToken generates an admin authentication token for IAM authentication to a DSQL database.
+//
+// This is the admin user variant; see [GenerateDbConnectAuthToken] for the regular user variant.
+//
+// * endpoint - Endpoint is the hostname to connect to the database
+// * region - Region is where the database is located
+// * creds - Credentials to use when signing the token
+func GenerateDBConnectAdminAuthToken(ctx context.Context, endpoint, region string, creds aws.CredentialsProvider, optFns ...func(options *TokenOptions)) (string, error) {
+ values := url.Values{
+ "Action": []string{adminUserAction},
+ }
+ return generateAuthToken(ctx, endpoint, region, values, vendorCode, creds, optFns...)
+}
+
+// All token generation functions produce a SigV4-presigned URL with the scheme
+// stripped; this helper implements that shared path for both variants.
+func generateAuthToken(ctx context.Context, endpoint, region string, values url.Values, signingID string, creds aws.CredentialsProvider, optFns ...func(options *TokenOptions)) (string, error) {
+ if len(region) == 0 {
+ return "", fmt.Errorf("region is required")
+ }
+ if len(endpoint) == 0 {
+ return "", fmt.Errorf("endpoint is required")
+ }
+
+ o := TokenOptions{}
+
+ for _, fn := range optFns {
+ fn(&o)
+ }
+
+ if o.ExpiresIn == 0 {
+ o.ExpiresIn = 15 * time.Minute
+ }
+
+ if creds == nil {
+ return "", fmt.Errorf("credentials provider must not be nil")
+ }
+
+ // the scheme is arbitrary and is only needed because validation of the URL requires one.
+ if !(strings.HasPrefix(endpoint, "http://") || strings.HasPrefix(endpoint, "https://")) {
+ endpoint = "https://" + endpoint
+ }
+
+ req, err := http.NewRequest("GET", endpoint, nil)
+ if err != nil {
+ return "", err
+ }
+ req.URL.RawQuery = values.Encode()
+ signer := v4.NewSigner()
+
+ credentials, err := creds.Retrieve(ctx)
+ if err != nil {
+ return "", err
+ }
+
+ expires := o.ExpiresIn
+ // if credentials expire before expiresIn, set that as the expiration time
+ if credentials.CanExpire && !credentials.Expires.IsZero() {
+ credsExpireIn := credentials.Expires.Sub(sdk.NowTime())
+ expires = min(o.ExpiresIn, credsExpireIn)
+ }
+ query := req.URL.Query()
+ query.Set("X-Amz-Expires", strconv.Itoa(int(expires.Seconds())))
+ req.URL.RawQuery = query.Encode()
+
+ signedURI, _, err := signer.PresignHTTP(ctx, credentials, req, emptyPayloadHash, signingID, region, sdk.NowTime().UTC())
+ if err != nil {
+ return "", err
+ }
+
+ url := signedURI
+ if strings.HasPrefix(url, "http://") {
+ url = url[len("http://"):]
+ } else if strings.HasPrefix(url, "https://") {
+ url = url[len("https://"):]
+ }
+
+ return url, nil
+}
diff --git a/feature/dsql/auth/auth_token_generator_test.go b/feature/dsql/auth/auth_token_generator_test.go
new file mode 100644
index 00000000000..32ea9d7b240
--- /dev/null
+++ b/feature/dsql/auth/auth_token_generator_test.go
@@ -0,0 +1,159 @@
+package auth
+
+import (
+ "context"
+ "net/url"
+ "strings"
+ "testing"
+ "time"
+
+ "github.com/aws/aws-sdk-go-v2/aws"
+ "github.com/aws/aws-sdk-go-v2/internal/sdk"
+)
+
+type dbTokenTestCase struct {
+ endpoint string
+ region string
+ expires time.Duration
+ credsExpireIn time.Duration
+ expectedHost string
+ expectedQueryParams []string
+ expectedError string
+}
+
+type tokenGenFunc func(ctx context.Context, endpoint, region string, creds aws.CredentialsProvider, optFns ...func(options *TokenOptions)) (string, error)
+
+func TestGenerateDbConnectAuthToken(t *testing.T) {
+ cases := map[string]dbTokenTestCase{
+ "no region": {
+ endpoint: "https://oo0bar1baz2quux3quuux4.dsql.us-east-1.on.aws",
+ expectedError: "region is required",
+ },
+ "no endpoint": {
+ region: "us-west-2",
+ expectedError: "endpoint is required",
+ },
+ "endpoint with scheme": {
+ endpoint: "https://oo0bar1baz2quux3quuux4.dsql.us-east-1.on.aws",
+ region: "us-east-1",
+ expectedHost: "oo0bar1baz2quux3quuux4.dsql.us-east-1.on.aws",
+ expectedQueryParams: []string{"Action=DbConnect"},
+ },
+ "endpoint without scheme": {
+ endpoint: "oo0bar1baz2quux3quuux4.dsql.us-east-1.on.aws",
+ region: "us-east-1",
+ expectedHost: "oo0bar1baz2quux3quuux4.dsql.us-east-1.on.aws",
+ expectedQueryParams: []string{"Action=DbConnect"},
+ },
+ "endpoint with region and expires": {
+ endpoint: "peccy.dsql.us-east-1.on.aws",
+ region: "us-east-1",
+ expires: time.Second * 450,
+ expectedHost: "peccy.dsql.us-east-1.on.aws",
+ expectedQueryParams: []string{
+ "Action=DbConnect",
+ "X-Amz-Algorithm=AWS4-HMAC-SHA256",
+ "X-Amz-Credential=akid/20240827/us-east-1/dsql/aws4_request",
+ "X-Amz-Date=20240827T000000Z",
+ "X-Amz-Expires=450"},
+ },
+ "pick credential expires when less than expires": {
+ endpoint: "peccy.dsql.us-east-1.on.aws",
+ region: "us-east-1",
+ credsExpireIn: time.Second * 100,
+ expires: time.Second * 450,
+ expectedHost: "peccy.dsql.us-east-1.on.aws",
+ expectedQueryParams: []string{
+ "Action=DbConnect",
+ "X-Amz-Algorithm=AWS4-HMAC-SHA256",
+ "X-Amz-Credential=akid/20240827/us-east-1/dsql/aws4_request",
+ "X-Amz-Date=20240827T000000Z",
+ "X-Amz-Expires=100"},
+ },
+ }
+
+ for _, c := range cases {
+ creds := &staticCredentials{AccessKey: "akid", SecretKey: "secret", expiresIn: c.credsExpireIn}
+ defer withTempGlobalTime(time.Date(2024, time.August, 27, 0, 0, 0, 0, time.UTC))()
+ optFns := func(options *TokenOptions) {}
+ if c.expires != 0 {
+ optFns = func(options *TokenOptions) {
+ options.ExpiresIn = c.expires
+ }
+ }
+ verifyTestCase(GenerateDbConnectAuthToken, c, creds, optFns, t)
+
+ // Update the test case to use Admin variant
+ updated := []string{}
+ for _, part := range c.expectedQueryParams {
+ if part == "Action=DbConnect" {
+ part = "Action=DbConnectAdmin"
+ }
+ updated = append(updated, part)
+ }
+ c.expectedQueryParams = updated
+
+ verifyTestCase(GenerateDBConnectAdminAuthToken, c, creds, optFns, t)
+ }
+}
+
+func verifyTestCase(f tokenGenFunc, c dbTokenTestCase, creds aws.CredentialsProvider, optFns func(options *TokenOptions), t *testing.T) {
+ token, err := f(context.Background(), c.endpoint, c.region, creds, optFns)
+ isErrorExpected := len(c.expectedError) > 0
+ if err != nil && !isErrorExpected {
+ t.Fatalf("expect no err, got: %v", err)
+ } else if err == nil && isErrorExpected {
+ t.Fatalf("Expected error %v got none", c.expectedError)
+ }
+ // adding a scheme so we can parse it back as a URL. This is because comparing
+ // just direct string comparison was failing, since "Action=DbConnect" is a substring of
+ // "Action=DbConnectAdmin".
+ parsed, err := url.Parse("http://" + token)
+ if err != nil {
+ t.Fatalf("Couldn't parse the token %v to URL after adding a scheme, got: %v", token, err)
+ }
+ if parsed.Host != c.expectedHost {
+ t.Errorf("expect host %v, got %v", c.expectedHost, parsed.Host)
+ }
+
+ q := parsed.Query()
+ queryValuePair := map[string]any{}
+ for k, v := range q {
+ pair := k + "=" + v[0]
+ queryValuePair[pair] = struct{}{}
+ }
+
+ for _, part := range c.expectedQueryParams {
+ if _, ok := queryValuePair[part]; !ok {
+ t.Errorf("expect part %s to be present in token %s", part, token)
+ }
+ }
+ if token != "" && c.expires == 0 {
+ if !strings.Contains(token, "X-Amz-Expires=900") {
+ t.Errorf("expect token to contain default X-Amz-Expires value of 900, got %v", token)
+ }
+ }
+}
+
+type staticCredentials struct {
+ AccessKey, SecretKey, Session string
+ expiresIn time.Duration
+}
+
+func (s *staticCredentials) Retrieve(ctx context.Context) (aws.Credentials, error) {
+ c := aws.Credentials{
+ AccessKeyID: s.AccessKey,
+ SecretAccessKey: s.SecretKey,
+ SessionToken: s.Session,
+ }
+ if s.expiresIn != 0 {
+ c.CanExpire = true
+ c.Expires = sdk.NowTime().Add(s.expiresIn)
+ }
+ return c, nil
+}
+
+func withTempGlobalTime(t time.Time) func() {
+ sdk.NowTime = func() time.Time { return t }
+ return func() { sdk.NowTime = time.Now }
+}
diff --git a/feature/dsql/auth/doc.go b/feature/dsql/auth/doc.go
new file mode 100644
index 00000000000..35599b6e684
--- /dev/null
+++ b/feature/dsql/auth/doc.go
@@ -0,0 +1,9 @@
+// Package auth is used to generate authentication tokens for Amazon Aurora DSQL.
+//
+// These tokens are signed with IAM credentials and are presented in place of a
+// password when connecting to a database.
+//
+// See the [official docs] for more details.
+//
+// [official docs]: https://docs.aws.amazon.com/aurora-dsql/latest/userguide/SECTION_authentication-token.html
+package auth
diff --git a/feature/dsql/auth/go.mod b/feature/dsql/auth/go.mod
new file mode 100644
index 00000000000..319166bc66e
--- /dev/null
+++ b/feature/dsql/auth/go.mod
@@ -0,0 +1,9 @@
+module github.com/aws/aws-sdk-go-v2/feature/dsql/auth
+
+go 1.21
+
+require github.com/aws/aws-sdk-go-v2 v1.32.6
+
+require github.com/aws/smithy-go v1.22.1 // indirect
+
+replace github.com/aws/aws-sdk-go-v2 => ../../../
diff --git a/feature/dsql/auth/go.sum b/feature/dsql/auth/go.sum
new file mode 100644
index 00000000000..bd2678891af
--- /dev/null
+++ b/feature/dsql/auth/go.sum
@@ -0,0 +1,2 @@
+github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
+github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
diff --git a/feature/dsql/auth/go_module_metadata.go b/feature/dsql/auth/go_module_metadata.go
new file mode 100644
index 00000000000..b976717bfac
--- /dev/null
+++ b/feature/dsql/auth/go_module_metadata.go
@@ -0,0 +1,6 @@
+// Code generated by internal/repotools/cmd/updatemodulemeta DO NOT EDIT.
+
+package auth
+
+// goModuleVersion is the tagged release for this module
+const goModuleVersion = "1.0.0"
diff --git a/feature/s3/manager/upload.go b/feature/s3/manager/upload.go
index 15861381abb..c6d04f1e99b 100644
--- a/feature/s3/manager/upload.go
+++ b/feature/s3/manager/upload.go
@@ -3,6 +3,7 @@ package manager
import (
"bytes"
"context"
+ "errors"
"fmt"
"io"
"net/http"
@@ -10,7 +11,6 @@ import (
"sync"
"github.com/aws/aws-sdk-go-v2/aws"
-
"github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/aws-sdk-go-v2/internal/awsutil"
internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
@@ -688,7 +688,7 @@ func (u *multiuploader) shouldContinue(part int32, nextChunkLen int, err error)
msg = fmt.Sprintf("exceeded total allowed S3 limit MaxUploadParts (%d). Adjust PartSize to fit in this limit",
MaxUploadParts)
}
- return false, fmt.Errorf(msg)
+ return false, errors.New(msg)
}
return true, err
diff --git a/internal/awstesting/assert.go b/internal/awstesting/assert.go
index 9b884c3def4..1d9bfb191ad 100644
--- a/internal/awstesting/assert.go
+++ b/internal/awstesting/assert.go
@@ -28,12 +28,12 @@ func AssertURL(t *testing.T, expect, actual string, msgAndArgs ...interface{}) b
expectURL, err := url.Parse(expect)
if err != nil {
- t.Errorf(errMsg("unable to parse expected URL", err, msgAndArgs))
+ t.Error(errMsg("unable to parse expected URL", err, msgAndArgs))
return false
}
actualURL, err := url.Parse(actual)
if err != nil {
- t.Errorf(errMsg("unable to parse actual URL", err, msgAndArgs))
+ t.Error(errMsg("unable to parse actual URL", err, msgAndArgs))
return false
}
@@ -52,12 +52,12 @@ func AssertQuery(t *testing.T, expect, actual string, msgAndArgs ...interface{})
expectQ, err := url.ParseQuery(expect)
if err != nil {
- t.Errorf(errMsg("unable to parse expected Query", err, msgAndArgs))
+ t.Error(errMsg("unable to parse expected Query", err, msgAndArgs))
return false
}
actualQ, err := url.ParseQuery(actual)
if err != nil {
- t.Errorf(errMsg("unable to parse actual Query", err, msgAndArgs))
+ t.Error(errMsg("unable to parse actual Query", err, msgAndArgs))
return false
}
@@ -106,13 +106,13 @@ func AssertJSON(t *testing.T, expect, actual string, msgAndArgs ...interface{})
expectVal := map[string]interface{}{}
if err := json.Unmarshal([]byte(expect), &expectVal); err != nil {
- t.Errorf(errMsg("unable to parse expected JSON", err, msgAndArgs...))
+ t.Error(errMsg("unable to parse expected JSON", err, msgAndArgs...))
return false
}
actualVal := map[string]interface{}{}
if err := json.Unmarshal([]byte(actual), &actualVal); err != nil {
- t.Errorf(errMsg("unable to parse actual JSON", err, msgAndArgs...))
+ t.Error(errMsg("unable to parse actual JSON", err, msgAndArgs...))
return false
}
@@ -123,12 +123,12 @@ func AssertJSON(t *testing.T, expect, actual string, msgAndArgs ...interface{})
func AssertXML(t *testing.T, expect, actual string, container interface{}, msgAndArgs ...interface{}) bool {
expectVal := container
if err := xml.Unmarshal([]byte(expect), &expectVal); err != nil {
- t.Errorf(errMsg("unable to parse expected XML", err, msgAndArgs...))
+ t.Error(errMsg("unable to parse expected XML", err, msgAndArgs...))
}
actualVal := container
if err := xml.Unmarshal([]byte(actual), &actualVal); err != nil {
- t.Errorf(errMsg("unable to parse actual XML", err, msgAndArgs...))
+ t.Error(errMsg("unable to parse actual XML", err, msgAndArgs...))
}
return equal(t, expectVal, actualVal, msgAndArgs...)
}
diff --git a/internal/endpoints/v2/package_test.go b/internal/endpoints/v2/package_test.go
index 07e02bd1500..b46c017e2aa 100644
--- a/internal/endpoints/v2/package_test.go
+++ b/internal/endpoints/v2/package_test.go
@@ -1303,7 +1303,7 @@ func TestLogDeprecated(t *testing.T) {
Logger: log.New(buffer, "", 0),
}, func(t *testing.T) {
if diff := cmpDiff("WARN endpoint identifier \"bar\", url \"https://foo.bar.bar.tld\" marked as deprecated\n", buffer.String()); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
},
@@ -1326,7 +1326,7 @@ func TestLogDeprecated(t *testing.T) {
Logger: log.New(buffer, "", 0),
}, func(t *testing.T) {
if diff := cmpDiff("WARN endpoint identifier \"bar\", url \"https://foo-fips.bar.bar.tld\" marked as deprecated\n", buffer.String()); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
},
@@ -1352,7 +1352,7 @@ func TestLogDeprecated(t *testing.T) {
}
if diff := cmpDiff(tt.Expected, endpoint); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
if verifyLog != nil {
diff --git a/service/account/CHANGELOG.md b/service/account/CHANGELOG.md
index 2e2ad0a4760..92372f2eb9f 100644
--- a/service/account/CHANGELOG.md
+++ b/service/account/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.22.0 (2024-12-17)
+
+* **Feature**: Update endpoint configuration.
+
# v1.21.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/account/endpoints.go b/service/account/endpoints.go
index c214f554189..8379752c79d 100644
--- a/service/account/endpoints.go
+++ b/service/account/endpoints.go
@@ -228,14 +228,6 @@ func bindRegion(region string) *string {
// EndpointParameters provides the parameters that influence how endpoints are
// resolved.
type EndpointParameters struct {
- // The AWS region used to dispatch the request.
- //
- // Parameter is
- // required.
- //
- // AWS::Region
- Region *string
-
// When true, use the dual-stack endpoint. If the configured endpoint does not
// support dual-stack, dispatching the request MAY return an error.
//
@@ -262,6 +254,14 @@ type EndpointParameters struct {
//
// SDK::Endpoint
Endpoint *string
+
+ // The AWS region used to dispatch the request.
+ //
+ // Parameter is
+ // required.
+ //
+ // AWS::Region
+ Region *string
}
// ValidateRequired validates required parameters are set.
@@ -358,10 +358,58 @@ func (r *resolver) ResolveEndpoint(
if exprVal := awsrulesfn.GetPartition(_Region); exprVal != nil {
_PartitionResult := *exprVal
_ = _PartitionResult
- if _PartitionResult.Name == "aws" {
- if _UseFIPS == false {
- if _UseDualStack == false {
- uriString := "https://account.us-east-1.amazonaws.com"
+ if _UseFIPS == true {
+ if _UseDualStack == true {
+ if true == _PartitionResult.SupportsFIPS {
+ if true == _PartitionResult.SupportsDualStack {
+ uriString := func() string {
+ var out strings.Builder
+ out.WriteString("https://account-fips.")
+ out.WriteString(_PartitionResult.ImplicitGlobalRegion)
+ out.WriteString(".")
+ out.WriteString(_PartitionResult.DualStackDnsSuffix)
+ return out.String()
+ }()
+
+ uri, err := url.Parse(uriString)
+ if err != nil {
+ return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+ }
+
+ return smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, _PartitionResult.ImplicitGlobalRegion)
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }, nil
+ }
+ }
+ return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS and DualStack are enabled, but this partition does not support one or both")
+ }
+ }
+ if _UseFIPS == true {
+ if _UseDualStack == false {
+ if _PartitionResult.SupportsFIPS == true {
+ uriString := func() string {
+ var out strings.Builder
+ out.WriteString("https://account-fips.")
+ out.WriteString(_PartitionResult.ImplicitGlobalRegion)
+ out.WriteString(".")
+ out.WriteString(_PartitionResult.DnsSuffix)
+ return out.String()
+ }()
uri, err := url.Parse(uriString)
if err != nil {
@@ -378,10 +426,7 @@ func (r *resolver) ResolveEndpoint(
SchemeID: "aws.auth#sigv4",
SignerProperties: func() smithy.Properties {
var sp smithy.Properties
- smithyhttp.SetSigV4SigningName(&sp, "account")
- smithyhttp.SetSigV4ASigningName(&sp, "account")
-
- smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+ smithyhttp.SetSigV4SigningRegion(&sp, _PartitionResult.ImplicitGlobalRegion)
return sp
}(),
},
@@ -390,12 +435,20 @@ func (r *resolver) ResolveEndpoint(
}(),
}, nil
}
+ return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS is enabled but this partition does not support FIPS")
}
}
- if _PartitionResult.Name == "aws-cn" {
- if _UseFIPS == false {
- if _UseDualStack == false {
- uriString := "https://account.cn-northwest-1.amazonaws.com.cn"
+ if _UseFIPS == false {
+ if _UseDualStack == true {
+ if true == _PartitionResult.SupportsDualStack {
+ uriString := func() string {
+ var out strings.Builder
+ out.WriteString("https://account.")
+ out.WriteString(_PartitionResult.ImplicitGlobalRegion)
+ out.WriteString(".")
+ out.WriteString(_PartitionResult.DualStackDnsSuffix)
+ return out.String()
+ }()
uri, err := url.Parse(uriString)
if err != nil {
@@ -412,10 +465,7 @@ func (r *resolver) ResolveEndpoint(
SchemeID: "aws.auth#sigv4",
SignerProperties: func() smithy.Properties {
var sp smithy.Properties
- smithyhttp.SetSigV4SigningName(&sp, "account")
- smithyhttp.SetSigV4ASigningName(&sp, "account")
-
- smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ smithyhttp.SetSigV4SigningRegion(&sp, _PartitionResult.ImplicitGlobalRegion)
return sp
}(),
},
@@ -424,85 +474,13 @@ func (r *resolver) ResolveEndpoint(
}(),
}, nil
}
+ return endpoint, fmt.Errorf("endpoint rule error, %s", "DualStack is enabled but this partition does not support DualStack")
}
}
- if _UseFIPS == true {
- if _UseDualStack == true {
- if true == _PartitionResult.SupportsFIPS {
- if true == _PartitionResult.SupportsDualStack {
- uriString := func() string {
- var out strings.Builder
- out.WriteString("https://account-fips.")
- out.WriteString(_Region)
- out.WriteString(".")
- out.WriteString(_PartitionResult.DualStackDnsSuffix)
- return out.String()
- }()
-
- uri, err := url.Parse(uriString)
- if err != nil {
- return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
- }
-
- return smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- }, nil
- }
- }
- return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS and DualStack are enabled, but this partition does not support one or both")
- }
- }
- if _UseFIPS == true {
- if _PartitionResult.SupportsFIPS == true {
- uriString := func() string {
- var out strings.Builder
- out.WriteString("https://account-fips.")
- out.WriteString(_Region)
- out.WriteString(".")
- out.WriteString(_PartitionResult.DnsSuffix)
- return out.String()
- }()
-
- uri, err := url.Parse(uriString)
- if err != nil {
- return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
- }
-
- return smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- }, nil
- }
- return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS is enabled but this partition does not support FIPS")
- }
- if _UseDualStack == true {
- if true == _PartitionResult.SupportsDualStack {
- uriString := func() string {
- var out strings.Builder
- out.WriteString("https://account.")
- out.WriteString(_Region)
- out.WriteString(".")
- out.WriteString(_PartitionResult.DualStackDnsSuffix)
- return out.String()
- }()
-
- uri, err := url.Parse(uriString)
- if err != nil {
- return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
- }
-
- return smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- }, nil
- }
- return endpoint, fmt.Errorf("endpoint rule error, %s", "DualStack is enabled but this partition does not support DualStack")
- }
uriString := func() string {
var out strings.Builder
out.WriteString("https://account.")
- out.WriteString(_Region)
+ out.WriteString(_PartitionResult.ImplicitGlobalRegion)
out.WriteString(".")
out.WriteString(_PartitionResult.DnsSuffix)
return out.String()
@@ -516,6 +494,20 @@ func (r *resolver) ResolveEndpoint(
return smithyendpoints.Endpoint{
URI: *uri,
Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, _PartitionResult.ImplicitGlobalRegion)
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}, nil
}
return endpoint, fmt.Errorf("Endpoint resolution failed. Invalid operation or environment input.")
@@ -530,10 +522,10 @@ type endpointParamsBinder interface {
func bindEndpointParams(ctx context.Context, input interface{}, options Options) *EndpointParameters {
params := &EndpointParameters{}
- params.Region = bindRegion(options.Region)
params.UseDualStack = aws.Bool(options.EndpointOptions.UseDualStackEndpoint == aws.DualStackEndpointStateEnabled)
params.UseFIPS = aws.Bool(options.EndpointOptions.UseFIPSEndpoint == aws.FIPSEndpointStateEnabled)
params.Endpoint = options.BaseEndpoint
+ params.Region = bindRegion(options.Region)
if b, ok := input.(endpointParamsBinder); ok {
b.bindEndpointParams(params)
diff --git a/service/account/endpoints_test.go b/service/account/endpoints_test.go
index df621ab0e93..65fb232dc20 100644
--- a/service/account/endpoints_test.go
+++ b/service/account/endpoints_test.go
@@ -16,12 +16,11 @@ import (
"testing"
)
-// For region aws-global with FIPS disabled and DualStack disabled
+// For custom endpoint with region not set and fips disabled
func TestEndpointCase0(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("aws-global"),
- UseFIPS: ptr.Bool(false),
- UseDualStack: ptr.Bool(false),
+ Endpoint: ptr.String("https://example.com"),
+ UseFIPS: ptr.Bool(false),
}
resolver := NewDefaultEndpointResolverV2()
@@ -32,28 +31,12 @@ func TestEndpointCase0(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.us-east-1.amazonaws.com")
+ uri, _ := url.Parse("https://example.com")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: func() smithy.Properties {
- var out smithy.Properties
- smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
- {
- SchemeID: "aws.auth#sigv4",
- SignerProperties: func() smithy.Properties {
- var sp smithy.Properties
- smithyhttp.SetSigV4SigningName(&sp, "account")
- smithyhttp.SetSigV4ASigningName(&sp, "account")
-
- smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
- return sp
- }(),
- },
- })
- return out
- }(),
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: smithy.Properties{},
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -69,49 +52,51 @@ func TestEndpointCase0(t *testing.T) {
}
}
-// For region us-east-1 with FIPS enabled and DualStack enabled
+// For custom endpoint with fips enabled
func TestEndpointCase1(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-east-1"),
- UseFIPS: ptr.Bool(true),
- UseDualStack: ptr.Bool(true),
+ Endpoint: ptr.String("https://example.com"),
+ UseFIPS: ptr.Bool(true),
}
resolver := NewDefaultEndpointResolverV2()
result, err := resolver.ResolveEndpoint(context.Background(), params)
_, _ = result, err
- if err != nil {
- t.Fatalf("expect no error, got %v", err)
+ if err == nil {
+ t.Fatalf("expect error, got none")
}
-
- uri, _ := url.Parse("https://account-fips.us-east-1.api.aws")
-
- expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ if e, a := "Invalid Configuration: FIPS and custom endpoint are not supported", err.Error(); !strings.Contains(a, e) {
+ t.Errorf("expect %v error in %v", e, a)
}
+}
- if e, a := expectEndpoint.URI, result.URI; e != a {
- t.Errorf("expect %v URI, got %v", e, a)
+// For custom endpoint with fips disabled and dualstack enabled
+func TestEndpointCase2(t *testing.T) {
+ var params = EndpointParameters{
+ Endpoint: ptr.String("https://example.com"),
+ UseFIPS: ptr.Bool(false),
+ UseDualStack: ptr.Bool(true),
}
- if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
- t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
- }
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
- if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
- t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ if err == nil {
+ t.Fatalf("expect error, got none")
+ }
+ if e, a := "Invalid Configuration: Dualstack and custom endpoint are not supported", err.Error(); !strings.Contains(a, e) {
+ t.Errorf("expect %v error in %v", e, a)
}
}
-// For region us-east-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase2(t *testing.T) {
+// For region us-east-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase3(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-east-1"),
UseFIPS: ptr.Bool(true),
- UseDualStack: ptr.Bool(false),
+ UseDualStack: ptr.Bool(true),
}
resolver := NewDefaultEndpointResolverV2()
@@ -122,12 +107,25 @@ func TestEndpointCase2(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account-fips.us-east-1.amazonaws.com")
+ uri, _ := url.Parse("https://account-fips.us-east-1.api.aws")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -143,12 +141,12 @@ func TestEndpointCase2(t *testing.T) {
}
}
-// For region us-east-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase3(t *testing.T) {
+// For region us-east-1 with FIPS enabled and DualStack disabled
+func TestEndpointCase4(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-east-1"),
- UseFIPS: ptr.Bool(false),
- UseDualStack: ptr.Bool(true),
+ UseFIPS: ptr.Bool(true),
+ UseDualStack: ptr.Bool(false),
}
resolver := NewDefaultEndpointResolverV2()
@@ -159,12 +157,25 @@ func TestEndpointCase3(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.us-east-1.api.aws")
+ uri, _ := url.Parse("https://account-fips.us-east-1.amazonaws.com")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -180,12 +191,12 @@ func TestEndpointCase3(t *testing.T) {
}
}
-// For region us-east-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase4(t *testing.T) {
+// For region us-east-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase5(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-east-1"),
UseFIPS: ptr.Bool(false),
- UseDualStack: ptr.Bool(false),
+ UseDualStack: ptr.Bool(true),
}
resolver := NewDefaultEndpointResolverV2()
@@ -196,7 +207,7 @@ func TestEndpointCase4(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.us-east-1.amazonaws.com")
+ uri, _ := url.Parse("https://account.us-east-1.api.aws")
expectEndpoint := smithyendpoints.Endpoint{
URI: *uri,
@@ -208,9 +219,6 @@ func TestEndpointCase4(t *testing.T) {
SchemeID: "aws.auth#sigv4",
SignerProperties: func() smithy.Properties {
var sp smithy.Properties
- smithyhttp.SetSigV4SigningName(&sp, "account")
- smithyhttp.SetSigV4ASigningName(&sp, "account")
-
smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
return sp
}(),
@@ -233,10 +241,10 @@ func TestEndpointCase4(t *testing.T) {
}
}
-// For region aws-cn-global with FIPS disabled and DualStack disabled
-func TestEndpointCase5(t *testing.T) {
+// For region us-east-1 with FIPS disabled and DualStack disabled
+func TestEndpointCase6(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("aws-cn-global"),
+ Region: ptr.String("us-east-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(false),
}
@@ -249,7 +257,7 @@ func TestEndpointCase5(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.cn-northwest-1.amazonaws.com.cn")
+ uri, _ := url.Parse("https://account.us-east-1.amazonaws.com")
expectEndpoint := smithyendpoints.Endpoint{
URI: *uri,
@@ -261,10 +269,7 @@ func TestEndpointCase5(t *testing.T) {
SchemeID: "aws.auth#sigv4",
SignerProperties: func() smithy.Properties {
var sp smithy.Properties
- smithyhttp.SetSigV4SigningName(&sp, "account")
- smithyhttp.SetSigV4ASigningName(&sp, "account")
-
- smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
return sp
}(),
},
@@ -286,10 +291,10 @@ func TestEndpointCase5(t *testing.T) {
}
}
-// For region cn-north-1 with FIPS enabled and DualStack enabled
-func TestEndpointCase6(t *testing.T) {
+// For region cn-northwest-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase7(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("cn-north-1"),
+ Region: ptr.String("cn-northwest-1"),
UseFIPS: ptr.Bool(true),
UseDualStack: ptr.Bool(true),
}
@@ -302,12 +307,25 @@ func TestEndpointCase6(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account-fips.cn-north-1.api.amazonwebservices.com.cn")
+ uri, _ := url.Parse("https://account-fips.cn-northwest-1.api.amazonwebservices.com.cn")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -323,10 +341,10 @@ func TestEndpointCase6(t *testing.T) {
}
}
-// For region cn-north-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase7(t *testing.T) {
+// For region cn-northwest-1 with FIPS enabled and DualStack disabled
+func TestEndpointCase8(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("cn-north-1"),
+ Region: ptr.String("cn-northwest-1"),
UseFIPS: ptr.Bool(true),
UseDualStack: ptr.Bool(false),
}
@@ -339,12 +357,25 @@ func TestEndpointCase7(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account-fips.cn-north-1.amazonaws.com.cn")
+ uri, _ := url.Parse("https://account-fips.cn-northwest-1.amazonaws.com.cn")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -360,10 +391,10 @@ func TestEndpointCase7(t *testing.T) {
}
}
-// For region cn-north-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase8(t *testing.T) {
+// For region cn-northwest-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase9(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("cn-north-1"),
+ Region: ptr.String("cn-northwest-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(true),
}
@@ -376,12 +407,25 @@ func TestEndpointCase8(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.cn-north-1.api.amazonwebservices.com.cn")
+ uri, _ := url.Parse("https://account.cn-northwest-1.api.amazonwebservices.com.cn")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -397,10 +441,10 @@ func TestEndpointCase8(t *testing.T) {
}
}
-// For region cn-north-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase9(t *testing.T) {
+// For region cn-northwest-1 with FIPS disabled and DualStack disabled
+func TestEndpointCase10(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("cn-north-1"),
+ Region: ptr.String("cn-northwest-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(false),
}
@@ -425,9 +469,6 @@ func TestEndpointCase9(t *testing.T) {
SchemeID: "aws.auth#sigv4",
SignerProperties: func() smithy.Properties {
var sp smithy.Properties
- smithyhttp.SetSigV4SigningName(&sp, "account")
- smithyhttp.SetSigV4ASigningName(&sp, "account")
-
smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
return sp
}(),
@@ -450,10 +491,10 @@ func TestEndpointCase9(t *testing.T) {
}
}
-// For region us-gov-east-1 with FIPS enabled and DualStack enabled
-func TestEndpointCase10(t *testing.T) {
+// For region us-gov-west-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase11(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-gov-east-1"),
+ Region: ptr.String("us-gov-west-1"),
UseFIPS: ptr.Bool(true),
UseDualStack: ptr.Bool(true),
}
@@ -466,12 +507,25 @@ func TestEndpointCase10(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account-fips.us-gov-east-1.api.aws")
+ uri, _ := url.Parse("https://account-fips.us-gov-west-1.api.aws")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-gov-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -487,10 +541,10 @@ func TestEndpointCase10(t *testing.T) {
}
}
-// For region us-gov-east-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase11(t *testing.T) {
+// For region us-gov-west-1 with FIPS enabled and DualStack disabled
+func TestEndpointCase12(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-gov-east-1"),
+ Region: ptr.String("us-gov-west-1"),
UseFIPS: ptr.Bool(true),
UseDualStack: ptr.Bool(false),
}
@@ -503,12 +557,25 @@ func TestEndpointCase11(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account-fips.us-gov-east-1.amazonaws.com")
+ uri, _ := url.Parse("https://account-fips.us-gov-west-1.amazonaws.com")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-gov-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -524,10 +591,10 @@ func TestEndpointCase11(t *testing.T) {
}
}
-// For region us-gov-east-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase12(t *testing.T) {
+// For region us-gov-west-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase13(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-gov-east-1"),
+ Region: ptr.String("us-gov-west-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(true),
}
@@ -540,20 +607,33 @@ func TestEndpointCase12(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.us-gov-east-1.api.aws")
+ uri, _ := url.Parse("https://account.us-gov-west-1.api.aws")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
- }
-
- if e, a := expectEndpoint.URI, result.URI; e != a {
- t.Errorf("expect %v URI, got %v", e, a)
- }
-
- if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
- t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-gov-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
}
if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
@@ -561,10 +641,10 @@ func TestEndpointCase12(t *testing.T) {
}
}
-// For region us-gov-east-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase13(t *testing.T) {
+// For region us-gov-west-1 with FIPS disabled and DualStack disabled
+func TestEndpointCase14(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-gov-east-1"),
+ Region: ptr.String("us-gov-west-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(false),
}
@@ -577,12 +657,25 @@ func TestEndpointCase13(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://account.us-gov-east-1.amazonaws.com")
+ uri, _ := url.Parse("https://account.us-gov-west-1.amazonaws.com")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-gov-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -599,7 +692,7 @@ func TestEndpointCase13(t *testing.T) {
}
// For region us-iso-east-1 with FIPS enabled and DualStack enabled
-func TestEndpointCase14(t *testing.T) {
+func TestEndpointCase15(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(true),
@@ -619,7 +712,7 @@ func TestEndpointCase14(t *testing.T) {
}
// For region us-iso-east-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase15(t *testing.T) {
+func TestEndpointCase16(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(true),
@@ -637,9 +730,22 @@ func TestEndpointCase15(t *testing.T) {
uri, _ := url.Parse("https://account-fips.us-iso-east-1.c2s.ic.gov")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -656,7 +762,7 @@ func TestEndpointCase15(t *testing.T) {
}
// For region us-iso-east-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase16(t *testing.T) {
+func TestEndpointCase17(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(false),
@@ -676,7 +782,7 @@ func TestEndpointCase16(t *testing.T) {
}
// For region us-iso-east-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase17(t *testing.T) {
+func TestEndpointCase18(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(false),
@@ -694,9 +800,22 @@ func TestEndpointCase17(t *testing.T) {
uri, _ := url.Parse("https://account.us-iso-east-1.c2s.ic.gov")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -713,7 +832,7 @@ func TestEndpointCase17(t *testing.T) {
}
// For region us-isob-east-1 with FIPS enabled and DualStack enabled
-func TestEndpointCase18(t *testing.T) {
+func TestEndpointCase19(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(true),
@@ -733,7 +852,7 @@ func TestEndpointCase18(t *testing.T) {
}
// For region us-isob-east-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase19(t *testing.T) {
+func TestEndpointCase20(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(true),
@@ -751,9 +870,22 @@ func TestEndpointCase19(t *testing.T) {
uri, _ := url.Parse("https://account-fips.us-isob-east-1.sc2s.sgov.gov")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -770,7 +902,7 @@ func TestEndpointCase19(t *testing.T) {
}
// For region us-isob-east-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase20(t *testing.T) {
+func TestEndpointCase21(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(false),
@@ -790,7 +922,7 @@ func TestEndpointCase20(t *testing.T) {
}
// For region us-isob-east-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase21(t *testing.T) {
+func TestEndpointCase22(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(false),
@@ -808,9 +940,22 @@ func TestEndpointCase21(t *testing.T) {
uri, _ := url.Parse("https://account.us-isob-east-1.sc2s.sgov.gov")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -826,13 +971,32 @@ func TestEndpointCase21(t *testing.T) {
}
}
-// For custom endpoint with region set and fips disabled and dualstack disabled
-func TestEndpointCase22(t *testing.T) {
+// For region eu-isoe-west-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase23(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-east-1"),
- UseFIPS: ptr.Bool(false),
+ Region: ptr.String("eu-isoe-west-1"),
+ UseFIPS: ptr.Bool(true),
+ UseDualStack: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err == nil {
+ t.Fatalf("expect error, got none")
+ }
+ if e, a := "FIPS and DualStack are enabled, but this partition does not support one or both", err.Error(); !strings.Contains(a, e) {
+ t.Errorf("expect %v error in %v", e, a)
+ }
+}
+
+// For region eu-isoe-west-1 with FIPS enabled and DualStack disabled
+func TestEndpointCase24(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("eu-isoe-west-1"),
+ UseFIPS: ptr.Bool(true),
UseDualStack: ptr.Bool(false),
- Endpoint: ptr.String("https://example.com"),
}
resolver := NewDefaultEndpointResolverV2()
@@ -843,12 +1007,25 @@ func TestEndpointCase22(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://example.com")
+ uri, _ := url.Parse("https://account-fips.eu-isoe-west-1.cloud.adc-e.uk")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "eu-isoe-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -864,12 +1041,32 @@ func TestEndpointCase22(t *testing.T) {
}
}
-// For custom endpoint with region not set and fips disabled and dualstack disabled
-func TestEndpointCase23(t *testing.T) {
+// For region eu-isoe-west-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase25(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("eu-isoe-west-1"),
+ UseFIPS: ptr.Bool(false),
+ UseDualStack: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err == nil {
+ t.Fatalf("expect error, got none")
+ }
+ if e, a := "DualStack is enabled but this partition does not support DualStack", err.Error(); !strings.Contains(a, e) {
+ t.Errorf("expect %v error in %v", e, a)
+ }
+}
+
+// For region eu-isoe-west-1 with FIPS disabled and DualStack disabled
+func TestEndpointCase26(t *testing.T) {
var params = EndpointParameters{
+ Region: ptr.String("eu-isoe-west-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(false),
- Endpoint: ptr.String("https://example.com"),
}
resolver := NewDefaultEndpointResolverV2()
@@ -880,12 +1077,25 @@ func TestEndpointCase23(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://example.com")
+ uri, _ := url.Parse("https://account.eu-isoe-west-1.cloud.adc-e.uk")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "eu-isoe-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -901,13 +1111,12 @@ func TestEndpointCase23(t *testing.T) {
}
}
-// For custom endpoint with fips enabled and dualstack disabled
-func TestEndpointCase24(t *testing.T) {
+// For region us-isof-south-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase27(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-east-1"),
+ Region: ptr.String("us-isof-south-1"),
UseFIPS: ptr.Bool(true),
- UseDualStack: ptr.Bool(false),
- Endpoint: ptr.String("https://example.com"),
+ UseDualStack: ptr.Bool(true),
}
resolver := NewDefaultEndpointResolverV2()
@@ -917,18 +1126,67 @@ func TestEndpointCase24(t *testing.T) {
if err == nil {
t.Fatalf("expect error, got none")
}
- if e, a := "Invalid Configuration: FIPS and custom endpoint are not supported", err.Error(); !strings.Contains(a, e) {
+ if e, a := "FIPS and DualStack are enabled, but this partition does not support one or both", err.Error(); !strings.Contains(a, e) {
t.Errorf("expect %v error in %v", e, a)
}
}
-// For custom endpoint with fips disabled and dualstack enabled
-func TestEndpointCase25(t *testing.T) {
+// For region us-isof-south-1 with FIPS enabled and DualStack disabled
+func TestEndpointCase28(t *testing.T) {
var params = EndpointParameters{
- Region: ptr.String("us-east-1"),
+ Region: ptr.String("us-isof-south-1"),
+ UseFIPS: ptr.Bool(true),
+ UseDualStack: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://account-fips.us-isof-south-1.csp.hci.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isof-south-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-isof-south-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase29(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-isof-south-1"),
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(true),
- Endpoint: ptr.String("https://example.com"),
}
resolver := NewDefaultEndpointResolverV2()
@@ -938,13 +1196,63 @@ func TestEndpointCase25(t *testing.T) {
if err == nil {
t.Fatalf("expect error, got none")
}
- if e, a := "Invalid Configuration: Dualstack and custom endpoint are not supported", err.Error(); !strings.Contains(a, e) {
+ if e, a := "DualStack is enabled but this partition does not support DualStack", err.Error(); !strings.Contains(a, e) {
t.Errorf("expect %v error in %v", e, a)
}
}
+// For region us-isof-south-1 with FIPS disabled and DualStack disabled
+func TestEndpointCase30(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-isof-south-1"),
+ UseFIPS: ptr.Bool(false),
+ UseDualStack: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://account.us-isof-south-1.csp.hci.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isof-south-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
// Missing region
-func TestEndpointCase26(t *testing.T) {
+func TestEndpointCase31(t *testing.T) {
var params = EndpointParameters{}
resolver := NewDefaultEndpointResolverV2()
diff --git a/service/account/go_module_metadata.go b/service/account/go_module_metadata.go
index 3e484f6aaa4..a2e67459fac 100644
--- a/service/account/go_module_metadata.go
+++ b/service/account/go_module_metadata.go
@@ -3,4 +3,4 @@
package account
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.21.7"
+const goModuleVersion = "1.22.0"
diff --git a/service/amplify/CHANGELOG.md b/service/amplify/CHANGELOG.md
index d1a1b15be41..e901cca32de 100644
--- a/service/amplify/CHANGELOG.md
+++ b/service/amplify/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.28.0 (2024-12-18)
+
+* **Feature**: Added WAF Configuration to Amplify Apps
+
# v1.27.5 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/amplify/deserializers.go b/service/amplify/deserializers.go
index 8e60285c206..23e1862b2d2 100644
--- a/service/amplify/deserializers.go
+++ b/service/amplify/deserializers.go
@@ -6489,6 +6489,27 @@ func awsRestjson1_deserializeDocumentApp(v **types.App, value interface{}) error
}
}
+ case "wafConfiguration":
+ if err := awsRestjson1_deserializeDocumentWafConfiguration(&sv.WafConfiguration, value); err != nil {
+ return err
+ }
+
+ case "webhookCreateTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.WebhookCreateTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected webhookCreateTime to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
default:
_, _ = key, value
@@ -8755,6 +8776,64 @@ func awsRestjson1_deserializeDocumentUnauthorizedException(v **types.Unauthorize
return nil
}
+func awsRestjson1_deserializeDocumentWafConfiguration(v **types.WafConfiguration, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.WafConfiguration
+ if *v == nil {
+ sv = &types.WafConfiguration{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "statusReason":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected StatusReason to be of type string, got %T instead", value)
+ }
+ sv.StatusReason = ptr.String(jtv)
+ }
+
+ case "wafStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected WafStatus to be of type string, got %T instead", value)
+ }
+ sv.WafStatus = types.WafStatus(jtv)
+ }
+
+ case "webAclArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected WebAclArn to be of type string, got %T instead", value)
+ }
+ sv.WebAclArn = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentWebhook(v **types.Webhook, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
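
The generated deserializers above all follow the same double-pointer (`**T`) pattern: the function can allocate the target on first use, merge into an existing value on repeated calls, and leave the pointer untouched when the JSON value is absent. A minimal standalone sketch of that pattern — `Config` and `deserializeConfig` are hypothetical names for illustration, not SDK APIs:

```go
package main

import "fmt"

// Config is a stand-in for a generated types struct such as WafConfiguration.
type Config struct {
	Name *string
}

// deserializeConfig mirrors the generated shape: v is a **Config so the
// function can allocate on first use or merge into an existing value.
func deserializeConfig(v **Config, value interface{}) error {
	if v == nil {
		return fmt.Errorf("unexpected nil of type %T", v)
	}
	if value == nil {
		return nil // absent JSON value leaves *v untouched
	}
	shape, ok := value.(map[string]interface{})
	if !ok {
		return fmt.Errorf("unexpected JSON type %v", value)
	}
	var sv *Config
	if *v == nil {
		sv = &Config{}
	} else {
		sv = *v
	}
	for key, val := range shape {
		switch key {
		case "name":
			if val != nil {
				jtv, ok := val.(string)
				if !ok {
					return fmt.Errorf("expected name to be string, got %T", val)
				}
				s := jtv
				sv.Name = &s
			}
		default:
			// unknown keys are ignored, matching the generated code
		}
	}
	*v = sv
	return nil
}

func main() {
	var cfg *Config
	err := deserializeConfig(&cfg, map[string]interface{}{"name": "demo"})
	fmt.Println(err == nil, *cfg.Name)
}
```

Unknown keys falling through to a no-op `default` is what lets older clients tolerate new fields (such as `wafConfiguration`) appearing in responses.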
diff --git a/service/amplify/go_module_metadata.go b/service/amplify/go_module_metadata.go
index f450adf7f04..8f42e801459 100644
--- a/service/amplify/go_module_metadata.go
+++ b/service/amplify/go_module_metadata.go
@@ -3,4 +3,4 @@
package amplify
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.27.5"
+const goModuleVersion = "1.28.0"
diff --git a/service/amplify/types/enums.go b/service/amplify/types/enums.go
index 9c46ba90273..bd51171bcad 100644
--- a/service/amplify/types/enums.go
+++ b/service/amplify/types/enums.go
@@ -79,6 +79,7 @@ type JobStatus string
// Enum values for JobStatus
const (
+ JobStatusCreated JobStatus = "CREATED"
JobStatusPending JobStatus = "PENDING"
JobStatusProvisioning JobStatus = "PROVISIONING"
JobStatusRunning JobStatus = "RUNNING"
@@ -94,6 +95,7 @@ const (
// The ordering of this slice is not guaranteed to be stable across updates.
func (JobStatus) Values() []JobStatus {
return []JobStatus{
+ "CREATED",
"PENDING",
"PROVISIONING",
"RUNNING",
@@ -241,3 +243,28 @@ func (UpdateStatus) Values() []UpdateStatus {
"UPDATE_FAILED",
}
}
+
+type WafStatus string
+
+// Enum values for WafStatus
+const (
+ WafStatusAssociating WafStatus = "ASSOCIATING"
+ WafStatusAssociationFailed WafStatus = "ASSOCIATION_FAILED"
+ WafStatusAssociationSuccess WafStatus = "ASSOCIATION_SUCCESS"
+ WafStatusDisassociating WafStatus = "DISASSOCIATING"
+ WafStatusDisassociationFailed WafStatus = "DISASSOCIATION_FAILED"
+)
+
+// Values returns all known values for WafStatus. Note that this can be expanded
+// in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (WafStatus) Values() []WafStatus {
+ return []WafStatus{
+ "ASSOCIATING",
+ "ASSOCIATION_FAILED",
+ "ASSOCIATION_SUCCESS",
+ "DISASSOCIATING",
+ "DISASSOCIATION_FAILED",
+ }
+}
diff --git a/service/amplify/types/types.go b/service/amplify/types/types.go
index 80b770e3984..9dc58a6e728 100644
--- a/service/amplify/types/types.go
+++ b/service/amplify/types/types.go
@@ -21,7 +21,7 @@ type App struct {
// This member is required.
AppId *string
- // Creates a date and time for the Amplify app.
+ // A timestamp of when Amplify created the application.
//
// This member is required.
CreateTime *time.Time
@@ -77,7 +77,7 @@ type App struct {
// This member is required.
Repository *string
- // Updates the date and time for the Amplify app.
+ // A timestamp of when Amplify updated the application.
//
// This member is required.
UpdateTime *time.Time
@@ -132,6 +132,14 @@ type App struct {
// The tag for the Amplify app.
Tags map[string]string
+ // Describes the Firewall configuration for the Amplify app. Firewall support
+ // enables you to protect your hosted applications with a direct integration with
+ // WAF.
+ WafConfiguration *WafConfiguration
+
+ // A timestamp of when Amplify created the webhook in your Git repository.
+ WebhookCreateTime *time.Time
+
noSmithyDocumentSerde
}
@@ -261,7 +269,7 @@ type Branch struct {
// This member is required.
BranchName *string
- // The creation date and time for a branch that is part of an Amplify app.
+ // A timestamp of when Amplify created the branch.
//
// This member is required.
CreateTime *time.Time
@@ -326,7 +334,7 @@ type Branch struct {
// This member is required.
Ttl *string
- // The last updated date and time for a branch that is part of an Amplify app.
+ // A timestamp for the last updated time for a branch.
//
// This member is required.
UpdateTime *time.Time
@@ -614,7 +622,7 @@ type JobSummary struct {
// This member is required.
CommitMessage *string
- // The commit date and time for the job.
+ // The commit date and time for the job.
//
// This member is required.
CommitTime *time.Time
@@ -769,6 +777,27 @@ type SubDomainSetting struct {
noSmithyDocumentSerde
}
+// Describes the Firewall configuration for a hosted Amplify application. Firewall
+// support enables you to protect your web applications with a direct integration
+// with WAF. For more information about using WAF protections for an Amplify
+// application, see [Firewall support for hosted sites] in the Amplify User Guide.
+//
+// [Firewall support for hosted sites]: https://docs.aws.amazon.com/amplify/latest/userguide/WAF-integration.html
+type WafConfiguration struct {
+
+ // The reason for the current status of the Firewall configuration.
+ StatusReason *string
+
+ // The status of the process to associate or disassociate a web ACL to an Amplify
+ // app.
+ WafStatus WafStatus
+
+ // The Amazon Resource Name (ARN) for the web ACL associated with an Amplify app.
+ WebAclArn *string
+
+ noSmithyDocumentSerde
+}
+
// Describes a webhook that connects repository events to an Amplify app.
type Webhook struct {
@@ -777,7 +806,7 @@ type Webhook struct {
// This member is required.
BranchName *string
- // The create date and time for a webhook.
+ // A timestamp of when Amplify created the webhook in your Git repository.
//
// This member is required.
CreateTime *time.Time
@@ -787,7 +816,7 @@ type Webhook struct {
// This member is required.
Description *string
- // Updates the date and time for a webhook.
+ // A timestamp of when Amplify updated the webhook in your Git repository.
//
// This member is required.
UpdateTime *time.Time
diff --git a/service/athena/CHANGELOG.md b/service/athena/CHANGELOG.md
index 5f1bd41d597..58a30dd570e 100644
--- a/service/athena/CHANGELOG.md
+++ b/service/athena/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.49.1 (2024-12-17)
+
+* No change notes available for this release.
+
# v1.49.0 (2024-12-03.2)
* **Feature**: Add FEDERATED type to CreateDataCatalog. This creates Athena Data Catalog, AWS Lambda connector, and AWS Glue connection. Create/DeleteDataCatalog returns DataCatalog. Add Status, ConnectionType, and Error to DataCatalog and DataCatalogSummary. Add DeleteCatalogOnly to delete Athena Catalog only.
diff --git a/service/athena/go_module_metadata.go b/service/athena/go_module_metadata.go
index 65ff6e13375..e3dfc96603c 100644
--- a/service/athena/go_module_metadata.go
+++ b/service/athena/go_module_metadata.go
@@ -3,4 +3,4 @@
package athena
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.49.0"
+const goModuleVersion = "1.49.1"
diff --git a/service/athena/internal/endpoints/endpoints.go b/service/athena/internal/endpoints/endpoints.go
index a320eb2ed58..b64e0a202d3 100644
--- a/service/athena/internal/endpoints/endpoints.go
+++ b/service/athena/internal/endpoints/endpoints.go
@@ -238,6 +238,15 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "athena.ap-southeast-4.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-5",
+ }: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-5",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "athena.ap-southeast-5.api.aws",
+ },
endpoints.EndpointKey{
Region: "ca-central-1",
}: endpoints.Endpoint{},
diff --git a/service/backup/CHANGELOG.md b/service/backup/CHANGELOG.md
index 6038b81e303..4b750b6a5c1 100644
--- a/service/backup/CHANGELOG.md
+++ b/service/backup/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.40.0 (2024-12-17)
+
+* **Feature**: Add Support for Backup Indexing
+
# v1.39.8 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/backup/api_op_DescribeRecoveryPoint.go b/service/backup/api_op_DescribeRecoveryPoint.go
index 499d3c62ed5..948b304fee1 100644
--- a/service/backup/api_op_DescribeRecoveryPoint.go
+++ b/service/backup/api_op_DescribeRecoveryPoint.go
@@ -102,6 +102,19 @@ type DescribeRecoveryPointOutput struct {
// example, arn:aws:iam::123456789012:role/S3Access .
IamRoleArn *string
+ // The current status of the backup index associated with the specified
+ // recovery point.
+ //
+ // Statuses are: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus types.IndexStatus
+
+ // A detailed message explaining the status of the backup index associated with
+ // the recovery point.
+ IndexStatusMessage *string
+
// A Boolean value that is returned as TRUE if the specified recovery point is
// encrypted, or FALSE if the recovery point is not encrypted.
IsEncrypted bool
diff --git a/service/backup/api_op_GetRecoveryPointIndexDetails.go b/service/backup/api_op_GetRecoveryPointIndexDetails.go
new file mode 100644
index 00000000000..c95cb690134
--- /dev/null
+++ b/service/backup/api_op_GetRecoveryPointIndexDetails.go
@@ -0,0 +1,216 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backup
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backup/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+// This operation returns the metadata and details specific to the backup index
+// associated with the specified recovery point.
+func (c *Client) GetRecoveryPointIndexDetails(ctx context.Context, params *GetRecoveryPointIndexDetailsInput, optFns ...func(*Options)) (*GetRecoveryPointIndexDetailsOutput, error) {
+ if params == nil {
+ params = &GetRecoveryPointIndexDetailsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetRecoveryPointIndexDetails", params, optFns, c.addOperationGetRecoveryPointIndexDetailsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetRecoveryPointIndexDetailsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetRecoveryPointIndexDetailsInput struct {
+
+ // The name of a logical container where backups are stored. Backup vaults are
+ // identified by names that are unique to the account used to create them and the
+ // Region where they are created.
+ //
+ // Accepted characters include lowercase letters, numbers, and hyphens.
+ //
+ // This member is required.
+ BackupVaultName *string
+
+ // An ARN that uniquely identifies a recovery point; for example,
+ // arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
+ // .
+ //
+ // This member is required.
+ RecoveryPointArn *string
+
+ noSmithyDocumentSerde
+}
+
+type GetRecoveryPointIndexDetailsOutput struct {
+
+ // An ARN that uniquely identifies the backup vault where the recovery point index
+ // is stored.
+ //
+ // For example, arn:aws:backup:us-east-1:123456789012:backup-vault:aBackupVault .
+ BackupVaultArn *string
+
+ // The date and time that a backup index finished creation, in Unix format and
+ // Coordinated Universal Time (UTC). The value of IndexCompletionDate is accurate
+ // to milliseconds. For example, the value 1516925490.087 represents Friday,
+ // January 26, 2018 12:11:30.087 AM.
+ IndexCompletionDate *time.Time
+
+ // The date and time that a backup index was created, in Unix format and
+ // Coordinated Universal Time (UTC). The value of IndexCreationDate is accurate
+ // to milliseconds. For example, the value 1516925490.087 represents Friday,
+ // January 26, 2018 12:11:30.087 AM.
+ IndexCreationDate *time.Time
+
+ // The date and time that a backup index was deleted, in Unix format and
+ // Coordinated Universal Time (UTC). The value of IndexDeletionDate is accurate
+ // to milliseconds. For example, the value 1516925490.087 represents Friday,
+ // January 26, 2018 12:11:30.087 AM.
+ IndexDeletionDate *time.Time
+
+ // The current status of the backup index associated with the specified
+ // recovery point.
+ //
+ // Statuses are: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus types.IndexStatus
+
+ // A detailed message explaining the status of a backup index associated with the
+ // recovery point.
+ IndexStatusMessage *string
+
+ // An ARN that uniquely identifies a recovery point; for example,
+ // arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
+ // .
+ RecoveryPointArn *string
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the source
+ // resource.
+ SourceResourceArn *string
+
+ // Count of items within the backup index associated with the recovery point.
+ TotalItemsIndexed *int64
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetRecoveryPointIndexDetailsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpGetRecoveryPointIndexDetails{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetRecoveryPointIndexDetails{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetRecoveryPointIndexDetails"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetRecoveryPointIndexDetailsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetRecoveryPointIndexDetails(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetRecoveryPointIndexDetails(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetRecoveryPointIndexDetails",
+ }
+}
diff --git a/service/backup/api_op_ListIndexedRecoveryPoints.go b/service/backup/api_op_ListIndexedRecoveryPoints.go
new file mode 100644
index 00000000000..92e92a6e537
--- /dev/null
+++ b/service/backup/api_op_ListIndexedRecoveryPoints.go
@@ -0,0 +1,295 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backup
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backup/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+// This operation returns a list of recovery points that have an associated index,
+// belonging to the specified account.
+//
+// Optional parameters you can include are: MaxResults, NextToken,
+// SourceResourceArn, CreatedBefore, CreatedAfter, and ResourceType.
+func (c *Client) ListIndexedRecoveryPoints(ctx context.Context, params *ListIndexedRecoveryPointsInput, optFns ...func(*Options)) (*ListIndexedRecoveryPointsOutput, error) {
+ if params == nil {
+ params = &ListIndexedRecoveryPointsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListIndexedRecoveryPoints", params, optFns, c.addOperationListIndexedRecoveryPointsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListIndexedRecoveryPointsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListIndexedRecoveryPointsInput struct {
+
+ // Returns only indexed recovery points that were created after the specified date.
+ CreatedAfter *time.Time
+
+ // Returns only indexed recovery points that were created before the specified
+ // date.
+ CreatedBefore *time.Time
+
+ // Include this parameter to filter the returned list by the indicated statuses.
+ //
+ // Accepted values: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus types.IndexStatus
+
+ // The maximum number of resource list items to be returned.
+ MaxResults *int32
+
+ // The next item following a partial list of returned recovery points.
+ //
+ // For example, if a request is made to return MaxResults number of indexed
+ // recovery points, NextToken allows you to return more items in your list
+ // starting at the location pointed to by the next token.
+ NextToken *string
+
+ // Returns a list of indexed recovery points for the specified resource type(s).
+ //
+ // Accepted values include:
+ //
+ // - EBS for Amazon Elastic Block Store
+ //
+ // - S3 for Amazon Simple Storage Service (Amazon S3)
+ ResourceType *string
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the source
+ // resource.
+ SourceResourceArn *string
+
+ noSmithyDocumentSerde
+}
+
+type ListIndexedRecoveryPointsOutput struct {
+
+ // A list of recovery points that have an associated index, belonging to the
+ // specified account.
+ IndexedRecoveryPoints []types.IndexedRecoveryPoint
+
+ // The next item following a partial list of returned recovery points.
+ //
+ // For example, if a request is made to return MaxResults number of indexed
+ // recovery points, NextToken allows you to return more items in your list
+ // starting at the location pointed to by the next token.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListIndexedRecoveryPointsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListIndexedRecoveryPoints{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListIndexedRecoveryPoints{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListIndexedRecoveryPoints"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListIndexedRecoveryPoints(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// ListIndexedRecoveryPointsPaginatorOptions is the paginator options for
+// ListIndexedRecoveryPoints
+type ListIndexedRecoveryPointsPaginatorOptions struct {
+ // The maximum number of resource list items to be returned.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// ListIndexedRecoveryPointsPaginator is a paginator for ListIndexedRecoveryPoints
+type ListIndexedRecoveryPointsPaginator struct {
+ options ListIndexedRecoveryPointsPaginatorOptions
+ client ListIndexedRecoveryPointsAPIClient
+ params *ListIndexedRecoveryPointsInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewListIndexedRecoveryPointsPaginator returns a new
+// ListIndexedRecoveryPointsPaginator
+func NewListIndexedRecoveryPointsPaginator(client ListIndexedRecoveryPointsAPIClient, params *ListIndexedRecoveryPointsInput, optFns ...func(*ListIndexedRecoveryPointsPaginatorOptions)) *ListIndexedRecoveryPointsPaginator {
+ if params == nil {
+ params = &ListIndexedRecoveryPointsInput{}
+ }
+
+ options := ListIndexedRecoveryPointsPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &ListIndexedRecoveryPointsPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListIndexedRecoveryPointsPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListIndexedRecoveryPoints page.
+func (p *ListIndexedRecoveryPointsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListIndexedRecoveryPointsOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+	result, err := p.client.ListIndexedRecoveryPoints(ctx, &params, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// ListIndexedRecoveryPointsAPIClient is a client that implements the
+// ListIndexedRecoveryPoints operation.
+type ListIndexedRecoveryPointsAPIClient interface {
+ ListIndexedRecoveryPoints(context.Context, *ListIndexedRecoveryPointsInput, ...func(*Options)) (*ListIndexedRecoveryPointsOutput, error)
+}
+
+var _ ListIndexedRecoveryPointsAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListIndexedRecoveryPoints(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListIndexedRecoveryPoints",
+ }
+}
diff --git a/service/backup/api_op_StartBackupJob.go b/service/backup/api_op_StartBackupJob.go
index 9d819691b88..066b3ebe779 100644
--- a/service/backup/api_op_StartBackupJob.go
+++ b/service/backup/api_op_StartBackupJob.go
@@ -71,6 +71,23 @@ type StartBackupJobInput struct {
// idempotency token results in a success message with no action taken.
IdempotencyToken *string
+ // Include this parameter to enable index creation if your backup job has a
+ // resource type that supports backup indexes.
+ //
+ // Resource types that support backup indexes include:
+ //
+ // - EBS for Amazon Elastic Block Store
+ //
+ // - S3 for Amazon Simple Storage Service (Amazon S3)
+ //
+ // Index can have 1 of 2 possible values, either ENABLED or DISABLED .
+ //
+ // To create a backup index for an eligible ACTIVE recovery point that does not
+ // yet have a backup index, set value to ENABLED .
+ //
+ // To delete a backup index, set value to DISABLED .
+ Index types.Index
+
// The lifecycle defines when a protected resource is transitioned to cold storage
// and when it expires. Backup will transition and expire backups automatically
// according to the lifecycle that you define.
diff --git a/service/backup/api_op_UpdateRecoveryPointIndexSettings.go b/service/backup/api_op_UpdateRecoveryPointIndexSettings.go
new file mode 100644
index 00000000000..d47c830c7f2
--- /dev/null
+++ b/service/backup/api_op_UpdateRecoveryPointIndexSettings.go
@@ -0,0 +1,209 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backup
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backup/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation updates the settings of a recovery point index.
+//
+// Required: BackupVaultName, RecoveryPointArn, and Index
+func (c *Client) UpdateRecoveryPointIndexSettings(ctx context.Context, params *UpdateRecoveryPointIndexSettingsInput, optFns ...func(*Options)) (*UpdateRecoveryPointIndexSettingsOutput, error) {
+ if params == nil {
+ params = &UpdateRecoveryPointIndexSettingsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateRecoveryPointIndexSettings", params, optFns, c.addOperationUpdateRecoveryPointIndexSettingsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateRecoveryPointIndexSettingsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateRecoveryPointIndexSettingsInput struct {
+
+ // The name of a logical container where backups are stored. Backup vaults are
+ // identified by names that are unique to the account used to create them and the
+ // Region where they are created.
+ //
+ // Accepted characters include lowercase letters, numbers, and hyphens.
+ //
+ // This member is required.
+ BackupVaultName *string
+
+ // Index can have 1 of 2 possible values, either ENABLED or DISABLED .
+ //
+ // To create a backup index for an eligible ACTIVE recovery point that does not
+ // yet have a backup index, set value to ENABLED .
+ //
+ // To delete a backup index, set value to DISABLED .
+ //
+ // This member is required.
+ Index types.Index
+
+ // An ARN that uniquely identifies a recovery point; for example,
+ // arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
+ // .
+ //
+ // This member is required.
+ RecoveryPointArn *string
+
+ // This specifies the IAM role ARN used for this operation.
+ //
+ // For example, arn:aws:iam::123456789012:role/S3Access
+ IamRoleArn *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateRecoveryPointIndexSettingsOutput struct {
+
+ // The name of a logical container where backups are stored. Backup vaults are
+ // identified by names that are unique to the account used to create them and the
+ // Region where they are created.
+ BackupVaultName *string
+
+ // Index can have 1 of 2 possible values, either ENABLED or DISABLED .
+ //
+ // A value of ENABLED means a backup index for an eligible ACTIVE recovery point
+ // has been created.
+ //
+ // A value of DISABLED means a backup index was deleted.
+ Index types.Index
+
+ // This is the current status for the backup index associated with the specified
+ // recovery point.
+ //
+ // Statuses are: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus types.IndexStatus
+
+ // An ARN that uniquely identifies a recovery point; for example,
+ // arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
+ // .
+ RecoveryPointArn *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateRecoveryPointIndexSettingsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpUpdateRecoveryPointIndexSettings{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpUpdateRecoveryPointIndexSettings{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateRecoveryPointIndexSettings"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateRecoveryPointIndexSettingsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateRecoveryPointIndexSettings(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateRecoveryPointIndexSettings(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateRecoveryPointIndexSettings",
+ }
+}
diff --git a/service/backup/deserializers.go b/service/backup/deserializers.go
index 6c9bef6394b..502f095b107 100644
--- a/service/backup/deserializers.go
+++ b/service/backup/deserializers.go
@@ -4958,6 +4958,24 @@ func awsRestjson1_deserializeOpDocumentDescribeRecoveryPointOutput(v **DescribeR
sv.IamRoleArn = ptr.String(jtv)
}
+ case "IndexStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IndexStatus to be of type string, got %T instead", value)
+ }
+ sv.IndexStatus = types.IndexStatus(jtv)
+ }
+
+ case "IndexStatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected string to be of type string, got %T instead", value)
+ }
+ sv.IndexStatusMessage = ptr.String(jtv)
+ }
+
case "IsEncrypted":
if value != nil {
jtv, ok := value.(bool)
@@ -7720,6 +7738,268 @@ func awsRestjson1_deserializeOpDocumentGetLegalHoldOutput(v **GetLegalHoldOutput
return nil
}
+type awsRestjson1_deserializeOpGetRecoveryPointIndexDetails struct {
+}
+
+func (*awsRestjson1_deserializeOpGetRecoveryPointIndexDetails) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetRecoveryPointIndexDetails) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorGetRecoveryPointIndexDetails(response, &metadata)
+ }
+ output := &GetRecoveryPointIndexDetailsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentGetRecoveryPointIndexDetailsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetRecoveryPointIndexDetails(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InvalidParameterValueException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterValueException(response, errorBody)
+
+ case strings.EqualFold("MissingParameterValueException", errorCode):
+ return awsRestjson1_deserializeErrorMissingParameterValueException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ServiceUnavailableException", errorCode):
+ return awsRestjson1_deserializeErrorServiceUnavailableException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
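The error deserializer above resolves the error code with a fixed precedence: a non-empty `X-Amzn-ErrorType` header wins, then the code found in the JSON body, then the `"UnknownError"` fallback. A small sketch of that precedence; `sanitize` only approximates `restjson.SanitizeErrorCode` (strip a `:`-delimited suffix and a `#`-delimited namespace prefix):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveErrorCode mirrors the precedence in the generated
// awsRestjson1_deserializeOpError* functions: header code first,
// then the JSON body's code, then the default.
func resolveErrorCode(headerCode, jsonCode string) string {
	code := "UnknownError"
	if headerCode != "" {
		code = sanitize(headerCode)
	}
	if headerCode == "" && jsonCode != "" {
		code = sanitize(jsonCode)
	}
	return code
}

// sanitize approximates restjson.SanitizeErrorCode, e.g. turning
// "com.amazonaws.backup#ResourceNotFoundException" into
// "ResourceNotFoundException".
func sanitize(code string) string {
	if i := strings.IndexByte(code, ':'); i >= 0 {
		code = code[:i]
	}
	if i := strings.IndexByte(code, '#'); i >= 0 {
		code = code[i+1:]
	}
	return code
}

func main() {
	fmt.Println(resolveErrorCode("com.amazonaws.backup#ResourceNotFoundException", ""))
	fmt.Println(resolveErrorCode("", "InvalidParameterValueException"))
	fmt.Println(resolveErrorCode("", ""))
}
```

The resolved code is then matched case-insensitively (`strings.EqualFold`) against the modeled exceptions, falling back to `smithy.GenericAPIError`.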
+func awsRestjson1_deserializeOpDocumentGetRecoveryPointIndexDetailsOutput(v **GetRecoveryPointIndexDetailsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetRecoveryPointIndexDetailsOutput
+ if *v == nil {
+ sv = &GetRecoveryPointIndexDetailsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupVaultArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.BackupVaultArn = ptr.String(jtv)
+ }
+
+ case "IndexCompletionDate":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.IndexCompletionDate = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "IndexCreationDate":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.IndexCreationDate = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "IndexDeletionDate":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.IndexDeletionDate = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "IndexStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IndexStatus to be of type string, got %T instead", value)
+ }
+ sv.IndexStatus = types.IndexStatus(jtv)
+ }
+
+ case "IndexStatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected string to be of type string, got %T instead", value)
+ }
+ sv.IndexStatusMessage = ptr.String(jtv)
+ }
+
+ case "RecoveryPointArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.RecoveryPointArn = ptr.String(jtv)
+ }
+
+ case "SourceResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.SourceResourceArn = ptr.String(jtv)
+ }
+
+ case "TotalItemsIndexed":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.TotalItemsIndexed = ptr.Int64(i64)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
type awsRestjson1_deserializeOpGetRecoveryPointRestoreMetadata struct {
}
@@ -10389,14 +10669,14 @@ func awsRestjson1_deserializeOpDocumentListFrameworksOutput(v **ListFrameworksOu
return nil
}
-type awsRestjson1_deserializeOpListLegalHolds struct {
+type awsRestjson1_deserializeOpListIndexedRecoveryPoints struct {
}
-func (*awsRestjson1_deserializeOpListLegalHolds) ID() string {
+func (*awsRestjson1_deserializeOpListIndexedRecoveryPoints) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpListLegalHolds) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpListIndexedRecoveryPoints) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -10414,9 +10694,9 @@ func (m *awsRestjson1_deserializeOpListLegalHolds) HandleDeserialize(ctx context
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorListLegalHolds(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorListIndexedRecoveryPoints(response, &metadata)
}
- output := &ListLegalHoldsOutput{}
+ output := &ListIndexedRecoveryPointsOutput{}
out.Result = output
var buff [1024]byte
@@ -10437,7 +10717,174 @@ func (m *awsRestjson1_deserializeOpListLegalHolds) HandleDeserialize(ctx context
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentListLegalHoldsOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentListIndexedRecoveryPointsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListIndexedRecoveryPoints(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InvalidParameterValueException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterValueException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ServiceUnavailableException", errorCode):
+ return awsRestjson1_deserializeErrorServiceUnavailableException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListIndexedRecoveryPointsOutput(v **ListIndexedRecoveryPointsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListIndexedRecoveryPointsOutput
+ if *v == nil {
+ sv = &ListIndexedRecoveryPointsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "IndexedRecoveryPoints":
+ if err := awsRestjson1_deserializeDocumentIndexedRecoveryPointList(&sv.IndexedRecoveryPoints, value); err != nil {
+ return err
+ }
+
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected string to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListLegalHolds struct {
+}
+
+func (*awsRestjson1_deserializeOpListLegalHolds) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListLegalHolds) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListLegalHolds(response, &metadata)
+ }
+ output := &ListLegalHoldsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListLegalHoldsOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -14693,20 +15140,215 @@ func awsRestjson1_deserializeOpErrorUpdateGlobalSettings(response *smithyhttp.Re
case strings.EqualFold("InvalidRequestException", errorCode):
return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
- case strings.EqualFold("MissingParameterValueException", errorCode):
- return awsRestjson1_deserializeErrorMissingParameterValueException(response, errorBody)
+ case strings.EqualFold("MissingParameterValueException", errorCode):
+ return awsRestjson1_deserializeErrorMissingParameterValueException(response, errorBody)
+
+ case strings.EqualFold("ServiceUnavailableException", errorCode):
+ return awsRestjson1_deserializeErrorServiceUnavailableException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsRestjson1_deserializeOpUpdateRecoveryPointIndexSettings struct {
+}
+
+func (*awsRestjson1_deserializeOpUpdateRecoveryPointIndexSettings) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpUpdateRecoveryPointIndexSettings) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRecoveryPointIndexSettings(response, &metadata)
+ }
+ output := &UpdateRecoveryPointIndexSettingsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentUpdateRecoveryPointIndexSettingsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorUpdateRecoveryPointIndexSettings(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InvalidParameterValueException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterValueException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("MissingParameterValueException", errorCode):
+ return awsRestjson1_deserializeErrorMissingParameterValueException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ServiceUnavailableException", errorCode):
+ return awsRestjson1_deserializeErrorServiceUnavailableException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentUpdateRecoveryPointIndexSettingsOutput(v **UpdateRecoveryPointIndexSettingsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateRecoveryPointIndexSettingsOutput
+ if *v == nil {
+ sv = &UpdateRecoveryPointIndexSettingsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupVaultName":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected BackupVaultName to be of type string, got %T instead", value)
+ }
+ sv.BackupVaultName = ptr.String(jtv)
+ }
+
+ case "Index":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected Index to be of type string, got %T instead", value)
+ }
+ sv.Index = types.Index(jtv)
+ }
+
+ case "IndexStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IndexStatus to be of type string, got %T instead", value)
+ }
+ sv.IndexStatus = types.IndexStatus(jtv)
+ }
+
+ case "RecoveryPointArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.RecoveryPointArn = ptr.String(jtv)
+ }
- case strings.EqualFold("ServiceUnavailableException", errorCode):
- return awsRestjson1_deserializeErrorServiceUnavailableException(response, errorBody)
+ default:
+ _, _ = key, value
- default:
- genericError := &smithy.GenericAPIError{
- Code: errorCode,
- Message: errorMessage,
}
- return genericError
-
}
+ *v = sv
+ return nil
}
type awsRestjson1_deserializeOpUpdateRecoveryPointLifecycle struct {
@@ -17075,6 +17717,11 @@ func awsRestjson1_deserializeDocumentBackupRule(v **types.BackupRule, value inte
sv.EnableContinuousBackup = ptr.Bool(jtv)
}
+ case "IndexActions":
+ if err := awsRestjson1_deserializeDocumentIndexActions(&sv.IndexActions, value); err != nil {
+ return err
+ }
+
case "Lifecycle":
if err := awsRestjson1_deserializeDocumentLifecycle(&sv.Lifecycle, value); err != nil {
return err
@@ -19070,6 +19717,236 @@ func awsRestjson1_deserializeDocumentGlobalSettings(v *map[string]string, value
return nil
}
+func awsRestjson1_deserializeDocumentIndexAction(v **types.IndexAction, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.IndexAction
+ if *v == nil {
+ sv = &types.IndexAction{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ResourceTypes":
+ if err := awsRestjson1_deserializeDocumentResourceTypes(&sv.ResourceTypes, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentIndexActions(v *[]types.IndexAction, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.IndexAction
+ if *v == nil {
+ cv = []types.IndexAction{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.IndexAction
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentIndexAction(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentIndexedRecoveryPoint(v **types.IndexedRecoveryPoint, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.IndexedRecoveryPoint
+ if *v == nil {
+ sv = &types.IndexedRecoveryPoint{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupCreationDate":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.BackupCreationDate = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "BackupVaultArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.BackupVaultArn = ptr.String(jtv)
+ }
+
+ case "IamRoleArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.IamRoleArn = ptr.String(jtv)
+ }
+
+ case "IndexCreationDate":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.IndexCreationDate = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "IndexStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IndexStatus to be of type string, got %T instead", value)
+ }
+ sv.IndexStatus = types.IndexStatus(jtv)
+ }
+
+ case "IndexStatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected string to be of type string, got %T instead", value)
+ }
+ sv.IndexStatusMessage = ptr.String(jtv)
+ }
+
+ case "RecoveryPointArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.RecoveryPointArn = ptr.String(jtv)
+ }
+
+ case "ResourceType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ResourceType to be of type string, got %T instead", value)
+ }
+ sv.ResourceType = ptr.String(jtv)
+ }
+
+ case "SourceResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.SourceResourceArn = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentIndexedRecoveryPointList(v *[]types.IndexedRecoveryPoint, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.IndexedRecoveryPoint
+ if *v == nil {
+ cv = []types.IndexedRecoveryPoint{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.IndexedRecoveryPoint
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentIndexedRecoveryPoint(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentInvalidParameterValueException(v **types.InvalidParameterValueException, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -20055,6 +20932,24 @@ func awsRestjson1_deserializeDocumentRecoveryPointByBackupVault(v **types.Recove
sv.IamRoleArn = ptr.String(jtv)
}
+ case "IndexStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IndexStatus to be of type string, got %T instead", value)
+ }
+ sv.IndexStatus = types.IndexStatus(jtv)
+ }
+
+ case "IndexStatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected string to be of type string, got %T instead", value)
+ }
+ sv.IndexStatusMessage = ptr.String(jtv)
+ }
+
case "IsEncrypted":
if value != nil {
jtv, ok := value.(bool)
@@ -20287,6 +21182,24 @@ func awsRestjson1_deserializeDocumentRecoveryPointByResource(v **types.RecoveryP
sv.EncryptionKeyArn = ptr.String(jtv)
}
+ case "IndexStatus":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IndexStatus to be of type string, got %T instead", value)
+ }
+ sv.IndexStatus = types.IndexStatus(jtv)
+ }
+
+ case "IndexStatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected string to be of type string, got %T instead", value)
+ }
+ sv.IndexStatusMessage = ptr.String(jtv)
+ }
+
case "IsParent":
if value != nil {
jtv, ok := value.(bool)
diff --git a/service/backup/generated.json b/service/backup/generated.json
index 0ac24e71a01..bbf99de4d2a 100644
--- a/service/backup/generated.json
+++ b/service/backup/generated.json
@@ -50,6 +50,7 @@
"api_op_GetBackupVaultAccessPolicy.go",
"api_op_GetBackupVaultNotifications.go",
"api_op_GetLegalHold.go",
+ "api_op_GetRecoveryPointIndexDetails.go",
"api_op_GetRecoveryPointRestoreMetadata.go",
"api_op_GetRestoreJobMetadata.go",
"api_op_GetRestoreTestingInferredMetadata.go",
@@ -66,6 +67,7 @@
"api_op_ListCopyJobSummaries.go",
"api_op_ListCopyJobs.go",
"api_op_ListFrameworks.go",
+ "api_op_ListIndexedRecoveryPoints.go",
"api_op_ListLegalHolds.go",
"api_op_ListProtectedResources.go",
"api_op_ListProtectedResourcesByBackupVault.go",
@@ -94,6 +96,7 @@
"api_op_UpdateBackupPlan.go",
"api_op_UpdateFramework.go",
"api_op_UpdateGlobalSettings.go",
+ "api_op_UpdateRecoveryPointIndexSettings.go",
"api_op_UpdateRecoveryPointLifecycle.go",
"api_op_UpdateRegionSettings.go",
"api_op_UpdateReportPlan.go",
diff --git a/service/backup/go_module_metadata.go b/service/backup/go_module_metadata.go
index 2f737ab624b..81226ee1b83 100644
--- a/service/backup/go_module_metadata.go
+++ b/service/backup/go_module_metadata.go
@@ -3,4 +3,4 @@
package backup
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.39.8"
+const goModuleVersion = "1.40.0"
diff --git a/service/backup/serializers.go b/service/backup/serializers.go
index d0247647690..1956434da85 100644
--- a/service/backup/serializers.go
+++ b/service/backup/serializers.go
@@ -3357,6 +3357,86 @@ func awsRestjson1_serializeOpHttpBindingsGetLegalHoldInput(v *GetLegalHoldInput,
return nil
}
+type awsRestjson1_serializeOpGetRecoveryPointIndexDetails struct {
+}
+
+func (*awsRestjson1_serializeOpGetRecoveryPointIndexDetails) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetRecoveryPointIndexDetails) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetRecoveryPointIndexDetailsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/backup-vaults/{BackupVaultName}/recovery-points/{RecoveryPointArn}/index")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsGetRecoveryPointIndexDetailsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetRecoveryPointIndexDetailsInput(v *GetRecoveryPointIndexDetailsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.BackupVaultName == nil || len(*v.BackupVaultName) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member BackupVaultName must not be empty")}
+ }
+ if v.BackupVaultName != nil {
+ if err := encoder.SetURI("BackupVaultName").String(*v.BackupVaultName); err != nil {
+ return err
+ }
+ }
+
+ if v.RecoveryPointArn == nil || len(*v.RecoveryPointArn) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member RecoveryPointArn must not be empty")}
+ }
+ if v.RecoveryPointArn != nil {
+ if err := encoder.SetURI("RecoveryPointArn").String(*v.RecoveryPointArn); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpGetRecoveryPointRestoreMetadata struct {
}
@@ -4653,6 +4733,96 @@ func awsRestjson1_serializeOpHttpBindingsListFrameworksInput(v *ListFrameworksIn
return nil
}
+type awsRestjson1_serializeOpListIndexedRecoveryPoints struct {
+}
+
+func (*awsRestjson1_serializeOpListIndexedRecoveryPoints) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListIndexedRecoveryPoints) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListIndexedRecoveryPointsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/indexes/recovery-point")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListIndexedRecoveryPointsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListIndexedRecoveryPointsInput(v *ListIndexedRecoveryPointsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.CreatedAfter != nil {
+ encoder.SetQuery("createdAfter").String(smithytime.FormatDateTime(*v.CreatedAfter))
+ }
+
+ if v.CreatedBefore != nil {
+ encoder.SetQuery("createdBefore").String(smithytime.FormatDateTime(*v.CreatedBefore))
+ }
+
+ if len(v.IndexStatus) > 0 {
+ encoder.SetQuery("indexStatus").String(string(v.IndexStatus))
+ }
+
+ if v.MaxResults != nil {
+ encoder.SetQuery("maxResults").Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ encoder.SetQuery("nextToken").String(*v.NextToken)
+ }
+
+ if v.ResourceType != nil {
+ encoder.SetQuery("resourceType").String(*v.ResourceType)
+ }
+
+ if v.SourceResourceArn != nil {
+ encoder.SetQuery("sourceResourceArn").String(*v.SourceResourceArn)
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpListLegalHolds struct {
}
@@ -6306,6 +6476,11 @@ func awsRestjson1_serializeOpDocumentStartBackupJobInput(v *StartBackupJobInput,
ok.String(*v.IdempotencyToken)
}
+ if len(v.Index) > 0 {
+ ok := object.Key("Index")
+ ok.String(string(v.Index))
+ }
+
if v.Lifecycle != nil {
ok := object.Key("Lifecycle")
if err := awsRestjson1_serializeDocumentLifecycle(v.Lifecycle, ok); err != nil {
@@ -7191,6 +7366,114 @@ func awsRestjson1_serializeOpDocumentUpdateGlobalSettingsInput(v *UpdateGlobalSe
return nil
}
+type awsRestjson1_serializeOpUpdateRecoveryPointIndexSettings struct {
+}
+
+func (*awsRestjson1_serializeOpUpdateRecoveryPointIndexSettings) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpUpdateRecoveryPointIndexSettings) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateRecoveryPointIndexSettingsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/backup-vaults/{BackupVaultName}/recovery-points/{RecoveryPointArn}/index")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsUpdateRecoveryPointIndexSettingsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentUpdateRecoveryPointIndexSettingsInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsUpdateRecoveryPointIndexSettingsInput(v *UpdateRecoveryPointIndexSettingsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.BackupVaultName == nil || len(*v.BackupVaultName) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member BackupVaultName must not be empty")}
+ }
+ if v.BackupVaultName != nil {
+ if err := encoder.SetURI("BackupVaultName").String(*v.BackupVaultName); err != nil {
+ return err
+ }
+ }
+
+ if v.RecoveryPointArn == nil || len(*v.RecoveryPointArn) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member RecoveryPointArn must not be empty")}
+ }
+ if v.RecoveryPointArn != nil {
+ if err := encoder.SetURI("RecoveryPointArn").String(*v.RecoveryPointArn); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentUpdateRecoveryPointIndexSettingsInput(v *UpdateRecoveryPointIndexSettingsInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.IamRoleArn != nil {
+ ok := object.Key("IamRoleArn")
+ ok.String(*v.IamRoleArn)
+ }
+
+ if len(v.Index) > 0 {
+ ok := object.Key("Index")
+ ok.String(string(v.Index))
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpUpdateRecoveryPointLifecycle struct {
}
@@ -7790,6 +8073,13 @@ func awsRestjson1_serializeDocumentBackupRuleInput(v *types.BackupRuleInput, val
ok.Boolean(*v.EnableContinuousBackup)
}
+ if v.IndexActions != nil {
+ ok := object.Key("IndexActions")
+ if err := awsRestjson1_serializeDocumentIndexActions(v.IndexActions, ok); err != nil {
+ return err
+ }
+ }
+
if v.Lifecycle != nil {
ok := object.Key("Lifecycle")
if err := awsRestjson1_serializeDocumentLifecycle(v.Lifecycle, ok); err != nil {
@@ -8167,6 +8457,33 @@ func awsRestjson1_serializeDocumentGlobalSettings(v map[string]string, value smi
return nil
}
+func awsRestjson1_serializeDocumentIndexAction(v *types.IndexAction, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.ResourceTypes != nil {
+ ok := object.Key("ResourceTypes")
+ if err := awsRestjson1_serializeDocumentResourceTypes(v.ResourceTypes, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentIndexActions(v []types.IndexAction, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentIndexAction(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
func awsRestjson1_serializeDocumentKeyValue(v *types.KeyValue, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -8416,6 +8733,17 @@ func awsRestjson1_serializeDocumentResourceTypeOptInPreference(v map[string]bool
return nil
}
+func awsRestjson1_serializeDocumentResourceTypes(v []string, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ av.String(v[i])
+ }
+ return nil
+}
+
func awsRestjson1_serializeDocumentRestoreTestingPlanForCreate(v *types.RestoreTestingPlanForCreate, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
diff --git a/service/backup/snapshot/api_op_GetRecoveryPointIndexDetails.go.snap b/service/backup/snapshot/api_op_GetRecoveryPointIndexDetails.go.snap
new file mode 100644
index 00000000000..886169e453b
--- /dev/null
+++ b/service/backup/snapshot/api_op_GetRecoveryPointIndexDetails.go.snap
@@ -0,0 +1,41 @@
+GetRecoveryPointIndexDetails
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backup/snapshot/api_op_ListIndexedRecoveryPoints.go.snap b/service/backup/snapshot/api_op_ListIndexedRecoveryPoints.go.snap
new file mode 100644
index 00000000000..ea68148e0cc
--- /dev/null
+++ b/service/backup/snapshot/api_op_ListIndexedRecoveryPoints.go.snap
@@ -0,0 +1,40 @@
+ListIndexedRecoveryPoints
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backup/snapshot/api_op_UpdateRecoveryPointIndexSettings.go.snap b/service/backup/snapshot/api_op_UpdateRecoveryPointIndexSettings.go.snap
new file mode 100644
index 00000000000..c407bdf52fc
--- /dev/null
+++ b/service/backup/snapshot/api_op_UpdateRecoveryPointIndexSettings.go.snap
@@ -0,0 +1,41 @@
+UpdateRecoveryPointIndexSettings
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backup/snapshot_test.go b/service/backup/snapshot_test.go
index d61bd7215aa..8be0eb63751 100644
--- a/service/backup/snapshot_test.go
+++ b/service/backup/snapshot_test.go
@@ -566,6 +566,18 @@ func TestCheckSnapshot_GetLegalHold(t *testing.T) {
}
}
+func TestCheckSnapshot_GetRecoveryPointIndexDetails(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetRecoveryPointIndexDetails(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetRecoveryPointIndexDetails")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_GetRecoveryPointRestoreMetadata(t *testing.T) {
svc := New(Options{})
_, err := svc.GetRecoveryPointRestoreMetadata(context.Background(), nil, func(o *Options) {
@@ -758,6 +770,18 @@ func TestCheckSnapshot_ListFrameworks(t *testing.T) {
}
}
+func TestCheckSnapshot_ListIndexedRecoveryPoints(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListIndexedRecoveryPoints(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListIndexedRecoveryPoints")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_ListLegalHolds(t *testing.T) {
svc := New(Options{})
_, err := svc.ListLegalHolds(context.Background(), nil, func(o *Options) {
@@ -1094,6 +1118,18 @@ func TestCheckSnapshot_UpdateGlobalSettings(t *testing.T) {
}
}
+func TestCheckSnapshot_UpdateRecoveryPointIndexSettings(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateRecoveryPointIndexSettings(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateRecoveryPointIndexSettings")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_UpdateRecoveryPointLifecycle(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateRecoveryPointLifecycle(context.Background(), nil, func(o *Options) {
@@ -1657,6 +1693,18 @@ func TestUpdateSnapshot_GetLegalHold(t *testing.T) {
}
}
+func TestUpdateSnapshot_GetRecoveryPointIndexDetails(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetRecoveryPointIndexDetails(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetRecoveryPointIndexDetails")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_GetRecoveryPointRestoreMetadata(t *testing.T) {
svc := New(Options{})
_, err := svc.GetRecoveryPointRestoreMetadata(context.Background(), nil, func(o *Options) {
@@ -1849,6 +1897,18 @@ func TestUpdateSnapshot_ListFrameworks(t *testing.T) {
}
}
+func TestUpdateSnapshot_ListIndexedRecoveryPoints(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListIndexedRecoveryPoints(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListIndexedRecoveryPoints")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_ListLegalHolds(t *testing.T) {
svc := New(Options{})
_, err := svc.ListLegalHolds(context.Background(), nil, func(o *Options) {
@@ -2185,6 +2245,18 @@ func TestUpdateSnapshot_UpdateGlobalSettings(t *testing.T) {
}
}
+func TestUpdateSnapshot_UpdateRecoveryPointIndexSettings(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateRecoveryPointIndexSettings(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateRecoveryPointIndexSettings")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_UpdateRecoveryPointLifecycle(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateRecoveryPointLifecycle(context.Background(), nil, func(o *Options) {
diff --git a/service/backup/types/enums.go b/service/backup/types/enums.go
index d329687d254..dd8cd022f4d 100644
--- a/service/backup/types/enums.go
+++ b/service/backup/types/enums.go
@@ -221,6 +221,48 @@ func (CopyJobStatus) Values() []CopyJobStatus {
}
}
+type Index string
+
+// Enum values for Index
+const (
+ IndexEnabled Index = "ENABLED"
+ IndexDisabled Index = "DISABLED"
+)
+
+// Values returns all known values for Index. Note that this can be expanded in
+// the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (Index) Values() []Index {
+ return []Index{
+ "ENABLED",
+ "DISABLED",
+ }
+}
+
+type IndexStatus string
+
+// Enum values for IndexStatus
+const (
+ IndexStatusPending IndexStatus = "PENDING"
+ IndexStatusActive IndexStatus = "ACTIVE"
+ IndexStatusFailed IndexStatus = "FAILED"
+ IndexStatusDeleting IndexStatus = "DELETING"
+)
+
+// Values returns all known values for IndexStatus. Note that this can be expanded
+// in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (IndexStatus) Values() []IndexStatus {
+ return []IndexStatus{
+ "PENDING",
+ "ACTIVE",
+ "FAILED",
+ "DELETING",
+ }
+}
+
type LegalHoldStatus string
// Enum values for LegalHoldStatus
diff --git a/service/backup/types/types.go b/service/backup/types/types.go
index b5728a36e2b..847542c4045 100644
--- a/service/backup/types/types.go
+++ b/service/backup/types/types.go
@@ -363,6 +363,14 @@ type BackupRule struct {
// specified) causes Backup to create snapshot backups.
EnableContinuousBackup *bool
+ // IndexActions is an array you use to specify how backup data should be indexed.
+ //
+ // Each BackupRule can have 0 or 1 IndexAction, as each backup can have up to one
+ // index associated with it.
+ //
+ // Within the array is ResourceTypes. Only one will be accepted for each BackupRule.
+ IndexActions []IndexAction
+
// The lifecycle defines when a protected resource is transitioned to cold storage
// and when it expires. Backup transitions and expires backups automatically
// according to the lifecycle that you define.
@@ -445,6 +453,17 @@ type BackupRuleInput struct {
// specified) causes Backup to create snapshot backups.
EnableContinuousBackup *bool
+ // There can be up to one IndexAction in each BackupRule, as each backup can have 0
+ // or 1 backup index associated with it.
+ //
+ // Within the array is ResourceTypes. Only 1 resource type will be accepted for
+ // each BackupRule. Valid values:
+ //
+ // - EBS for Amazon Elastic Block Store
+ //
+ // - S3 for Amazon Simple Storage Service (Amazon S3)
+ IndexActions []IndexAction
+
// The lifecycle defines when a protected resource is transitioned to cold storage
// and when it expires. Backup will transition and expire backups automatically
// according to the lifecycle that you define.
@@ -1103,6 +1122,82 @@ type FrameworkControl struct {
noSmithyDocumentSerde
}
+// This is an optional array within a BackupRule.
+//
+// IndexAction consists of one ResourceTypes array.
+type IndexAction struct {
+
+ // 0 or 1 index action will be accepted for each BackupRule.
+ //
+ // Valid values:
+ //
+ // - EBS for Amazon Elastic Block Store
+ //
+ // - S3 for Amazon Simple Storage Service (Amazon S3)
+ ResourceTypes []string
+
+ noSmithyDocumentSerde
+}
+
+// This is a recovery point that has an associated backup index.
+//
+// Only recovery points with a backup index can be included in a search.
+type IndexedRecoveryPoint struct {
+
+ // The date and time that a backup was created, in Unix format and Coordinated
+ // Universal Time (UTC). The value of CreationDate is accurate to milliseconds.
+ // For example, the value 1516925490.087 represents Friday, January 26, 2018
+ // 12:11:30.087 AM.
+ BackupCreationDate *time.Time
+
+ // An ARN that uniquely identifies the backup vault where the recovery point index
+ // is stored.
+ //
+ // For example, arn:aws:backup:us-east-1:123456789012:backup-vault:aBackupVault .
+ BackupVaultArn *string
+
+ // This specifies the IAM role ARN used for this operation.
+ //
+ // For example, arn:aws:iam::123456789012:role/S3Access
+ IamRoleArn *string
+
+ // The date and time that a backup index was created, in Unix format and
+ // Coordinated Universal Time (UTC). The value of CreationDate is accurate to
+ // milliseconds. For example, the value 1516925490.087 represents Friday, January
+ // 26, 2018 12:11:30.087 AM.
+ IndexCreationDate *time.Time
+
+ // This is the current status for the backup index associated with the specified
+ // recovery point.
+ //
+ // Statuses are: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus IndexStatus
+
+ // A string in the form of a detailed message explaining the status of a backup
+ // index associated with the recovery point.
+ IndexStatusMessage *string
+
+ // An ARN that uniquely identifies a recovery point; for example,
+ // arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
+ RecoveryPointArn *string
+
+ // The resource type of the indexed recovery point.
+ //
+ // - EBS for Amazon Elastic Block Store
+ //
+ // - S3 for Amazon Simple Storage Service (Amazon S3)
+ ResourceType *string
+
+ // A string of the Amazon Resource Name (ARN) that uniquely identifies the source
+ // resource.
+ SourceResourceArn *string
+
+ noSmithyDocumentSerde
+}
+
// Pair of two related strings. Allowed characters are letters, white space, and
// numbers that can be represented in UTF-8 and the following characters: + - = .
// _ : /
@@ -1299,6 +1394,19 @@ type RecoveryPointByBackupVault struct {
// example, arn:aws:iam::123456789012:role/S3Access .
IamRoleArn *string
+ // This is the current status for the backup index associated with the specified
+ // recovery point.
+ //
+ // Statuses are: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus IndexStatus
+
+ // A string in the form of a detailed message explaining the status of a backup
+ // index associated with the recovery point.
+ IndexStatusMessage *string
+
// A Boolean value that is returned as TRUE if the specified recovery point is
// encrypted, or FALSE if the recovery point is not encrypted.
IsEncrypted bool
@@ -1387,6 +1495,19 @@ type RecoveryPointByResource struct {
// arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab .
EncryptionKeyArn *string
+ // This is the current status for the backup index associated with the specified
+ // recovery point.
+ //
+ // Statuses are: PENDING | ACTIVE | FAILED | DELETING
+ //
+ // A recovery point with an index that has the status of ACTIVE can be included in
+ // a search.
+ IndexStatus IndexStatus
+
+ // A string in the form of a detailed message explaining the status of a backup
+ // index associated with the recovery point.
+ IndexStatusMessage *string
+
// This is a boolean value indicating this is a parent (composite) recovery point.
IsParent bool
diff --git a/service/backup/validators.go b/service/backup/validators.go
index 216b4787be0..404262491dd 100644
--- a/service/backup/validators.go
+++ b/service/backup/validators.go
@@ -810,6 +810,26 @@ func (m *validateOpGetLegalHold) HandleInitialize(ctx context.Context, in middle
return next.HandleInitialize(ctx, in)
}
+type validateOpGetRecoveryPointIndexDetails struct {
+}
+
+func (*validateOpGetRecoveryPointIndexDetails) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetRecoveryPointIndexDetails) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetRecoveryPointIndexDetailsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetRecoveryPointIndexDetailsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpGetRecoveryPointRestoreMetadata struct {
}
@@ -1350,6 +1370,26 @@ func (m *validateOpUpdateFramework) HandleInitialize(ctx context.Context, in mid
return next.HandleInitialize(ctx, in)
}
+type validateOpUpdateRecoveryPointIndexSettings struct {
+}
+
+func (*validateOpUpdateRecoveryPointIndexSettings) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateRecoveryPointIndexSettings) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateRecoveryPointIndexSettingsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateRecoveryPointIndexSettingsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpUpdateRecoveryPointLifecycle struct {
}
@@ -1590,6 +1630,10 @@ func addOpGetLegalHoldValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetLegalHold{}, middleware.After)
}
+func addOpGetRecoveryPointIndexDetailsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetRecoveryPointIndexDetails{}, middleware.After)
+}
+
func addOpGetRecoveryPointRestoreMetadataValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetRecoveryPointRestoreMetadata{}, middleware.After)
}
@@ -1698,6 +1742,10 @@ func addOpUpdateFrameworkValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateFramework{}, middleware.After)
}
+func addOpUpdateRecoveryPointIndexSettingsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateRecoveryPointIndexSettings{}, middleware.After)
+}
+
func addOpUpdateRecoveryPointLifecycleValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateRecoveryPointLifecycle{}, middleware.After)
}
@@ -2768,6 +2816,24 @@ func validateOpGetLegalHoldInput(v *GetLegalHoldInput) error {
}
}
+func validateOpGetRecoveryPointIndexDetailsInput(v *GetRecoveryPointIndexDetailsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetRecoveryPointIndexDetailsInput"}
+ if v.BackupVaultName == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("BackupVaultName"))
+ }
+ if v.RecoveryPointArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("RecoveryPointArn"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpGetRecoveryPointRestoreMetadataInput(v *GetRecoveryPointRestoreMetadataInput) error {
if v == nil {
return nil
@@ -3227,6 +3293,27 @@ func validateOpUpdateFrameworkInput(v *UpdateFrameworkInput) error {
}
}
+func validateOpUpdateRecoveryPointIndexSettingsInput(v *UpdateRecoveryPointIndexSettingsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateRecoveryPointIndexSettingsInput"}
+ if v.BackupVaultName == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("BackupVaultName"))
+ }
+ if v.RecoveryPointArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("RecoveryPointArn"))
+ }
+ if len(v.Index) == 0 {
+ invalidParams.Add(smithy.NewErrParamRequired("Index"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
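The two new validators above follow the SDK's standard pattern: accumulate every missing required parameter into a `smithy.InvalidParamsError` and return it only when non-empty, so the caller sees all failures at once rather than the first one. A stdlib-only sketch of that accumulation pattern (`invalidParams` here is a local stand-in for smithy's type, and `Input` only mirrors the shape of `UpdateRecoveryPointIndexSettingsInput` for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// invalidParams is a local stand-in for smithy.InvalidParamsError:
// it collects missing-parameter names for one operation context.
type invalidParams struct {
	Context string
	fields  []string
}

func (e *invalidParams) Add(field string) { e.fields = append(e.fields, field) }
func (e *invalidParams) Len() int         { return len(e.fields) }
func (e *invalidParams) Error() string {
	return fmt.Sprintf("%s: missing required fields: %s", e.Context, strings.Join(e.fields, ", "))
}

// Input mirrors the validated input's shape for illustration only;
// pointer fields signal "unset" the same way the SDK's inputs do.
type Input struct {
	BackupVaultName  *string
	RecoveryPointArn *string
	Index            string
}

func validateInput(v *Input) error {
	if v == nil {
		return nil // a nil input is rejected elsewhere in the stack
	}
	e := &invalidParams{Context: "UpdateRecoveryPointIndexSettingsInput"}
	if v.BackupVaultName == nil {
		e.Add("BackupVaultName")
	}
	if v.RecoveryPointArn == nil {
		e.Add("RecoveryPointArn")
	}
	if len(v.Index) == 0 {
		e.Add("Index")
	}
	if e.Len() > 0 {
		return e // every missing field reported together
	}
	return nil
}

func main() {
	name := "vault"
	fmt.Println(validateInput(&Input{BackupVaultName: &name}))
}
```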
func validateOpUpdateRecoveryPointLifecycleInput(v *UpdateRecoveryPointLifecycleInput) error {
if v == nil {
return nil
diff --git a/service/backupsearch/CHANGELOG.md b/service/backupsearch/CHANGELOG.md
new file mode 100644
index 00000000000..ddc50a30f7a
--- /dev/null
+++ b/service/backupsearch/CHANGELOG.md
@@ -0,0 +1,5 @@
+# v1.0.0 (2024-12-17)
+
+* **Release**: New AWS service client module
+* **Feature**: Add support for searching backups
+
diff --git a/service/backupsearch/LICENSE.txt b/service/backupsearch/LICENSE.txt
new file mode 100644
index 00000000000..d6456956733
--- /dev/null
+++ b/service/backupsearch/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/service/backupsearch/api_client.go b/service/backupsearch/api_client.go
new file mode 100644
index 00000000000..a323a84b211
--- /dev/null
+++ b/service/backupsearch/api_client.go
@@ -0,0 +1,912 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "github.com/aws/aws-sdk-go-v2/aws"
+ "github.com/aws/aws-sdk-go-v2/aws/defaults"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/aws/retry"
+ "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+ awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+ internalauth "github.com/aws/aws-sdk-go-v2/internal/auth"
+ internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+ internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+ internalmiddleware "github.com/aws/aws-sdk-go-v2/internal/middleware"
+ smithy "github.com/aws/smithy-go"
+ smithyauth "github.com/aws/smithy-go/auth"
+ smithydocument "github.com/aws/smithy-go/document"
+ "github.com/aws/smithy-go/logging"
+ "github.com/aws/smithy-go/metrics"
+ "github.com/aws/smithy-go/middleware"
+ "github.com/aws/smithy-go/tracing"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "net"
+ "net/http"
+ "sync/atomic"
+ "time"
+)
+
+const ServiceID = "BackupSearch"
+const ServiceAPIVersion = "2018-05-10"
+
+type operationMetrics struct {
+ Duration metrics.Float64Histogram
+ SerializeDuration metrics.Float64Histogram
+ ResolveIdentityDuration metrics.Float64Histogram
+ ResolveEndpointDuration metrics.Float64Histogram
+ SignRequestDuration metrics.Float64Histogram
+ DeserializeDuration metrics.Float64Histogram
+}
+
+func (m *operationMetrics) histogramFor(name string) metrics.Float64Histogram {
+ switch name {
+ case "client.call.duration":
+ return m.Duration
+ case "client.call.serialization_duration":
+ return m.SerializeDuration
+ case "client.call.resolve_identity_duration":
+ return m.ResolveIdentityDuration
+ case "client.call.resolve_endpoint_duration":
+ return m.ResolveEndpointDuration
+ case "client.call.signing_duration":
+ return m.SignRequestDuration
+ case "client.call.deserialization_duration":
+ return m.DeserializeDuration
+ default:
+ panic("unrecognized operation metric")
+ }
+}
+
+func timeOperationMetric[T any](
+ ctx context.Context, metric string, fn func() (T, error),
+ opts ...metrics.RecordMetricOption,
+) (T, error) {
+ instr := getOperationMetrics(ctx).histogramFor(metric)
+ opts = append([]metrics.RecordMetricOption{withOperationMetadata(ctx)}, opts...)
+
+ start := time.Now()
+ v, err := fn()
+ end := time.Now()
+
+ elapsed := end.Sub(start)
+ instr.Record(ctx, float64(elapsed)/1e9, opts...)
+ return v, err
+}
+
+func startMetricTimer(ctx context.Context, metric string, opts ...metrics.RecordMetricOption) func() {
+ instr := getOperationMetrics(ctx).histogramFor(metric)
+ opts = append([]metrics.RecordMetricOption{withOperationMetadata(ctx)}, opts...)
+
+ var ended bool
+ start := time.Now()
+ return func() {
+ if ended {
+ return
+ }
+ ended = true
+
+ end := time.Now()
+
+ elapsed := end.Sub(start)
+ instr.Record(ctx, float64(elapsed)/1e9, opts...)
+ }
+}
+
+func withOperationMetadata(ctx context.Context) metrics.RecordMetricOption {
+ return func(o *metrics.RecordMetricOptions) {
+ o.Properties.Set("rpc.service", middleware.GetServiceID(ctx))
+ o.Properties.Set("rpc.method", middleware.GetOperationName(ctx))
+ }
+}
+
+type operationMetricsKey struct{}
+
+func withOperationMetrics(parent context.Context, mp metrics.MeterProvider) (context.Context, error) {
+ meter := mp.Meter("github.com/aws/aws-sdk-go-v2/service/backupsearch")
+ om := &operationMetrics{}
+
+ var err error
+
+ om.Duration, err = operationMetricTimer(meter, "client.call.duration",
+ "Overall call duration (including retries and time to send or receive request and response body)")
+ if err != nil {
+ return nil, err
+ }
+ om.SerializeDuration, err = operationMetricTimer(meter, "client.call.serialization_duration",
+ "The time it takes to serialize a message body")
+ if err != nil {
+ return nil, err
+ }
+ om.ResolveIdentityDuration, err = operationMetricTimer(meter, "client.call.auth.resolve_identity_duration",
+ "The time taken to acquire an identity (AWS credentials, bearer token, etc) from an Identity Provider")
+ if err != nil {
+ return nil, err
+ }
+ om.ResolveEndpointDuration, err = operationMetricTimer(meter, "client.call.resolve_endpoint_duration",
+ "The time it takes to resolve an endpoint (endpoint resolver, not DNS) for the request")
+ if err != nil {
+ return nil, err
+ }
+ om.SignRequestDuration, err = operationMetricTimer(meter, "client.call.auth.signing_duration",
+ "The time it takes to sign a request")
+ if err != nil {
+ return nil, err
+ }
+ om.DeserializeDuration, err = operationMetricTimer(meter, "client.call.deserialization_duration",
+ "The time it takes to deserialize a message body")
+ if err != nil {
+ return nil, err
+ }
+
+ return context.WithValue(parent, operationMetricsKey{}, om), nil
+}
+
+func operationMetricTimer(m metrics.Meter, name, desc string) (metrics.Float64Histogram, error) {
+ return m.Float64Histogram(name, func(o *metrics.InstrumentOptions) {
+ o.UnitLabel = "s"
+ o.Description = desc
+ })
+}
+
+func getOperationMetrics(ctx context.Context) *operationMetrics {
+ return ctx.Value(operationMetricsKey{}).(*operationMetrics)
+}
+
+func operationTracer(p tracing.TracerProvider) tracing.Tracer {
+ return p.Tracer("github.com/aws/aws-sdk-go-v2/service/backupsearch")
+}
+
+// Client provides the API client to make operations call for AWS Backup Search.
+type Client struct {
+ options Options
+
+ // Difference between the time reported by the server and the client
+ timeOffset *atomic.Int64
+}
+
+// New returns an initialized Client based on the functional options. Provide
+// additional functional options to further configure the behavior of the client,
+// such as changing the client's endpoint or adding custom middleware behavior.
+func New(options Options, optFns ...func(*Options)) *Client {
+ options = options.Copy()
+
+ resolveDefaultLogger(&options)
+
+ setResolvedDefaultsMode(&options)
+
+ resolveRetryer(&options)
+
+ resolveHTTPClient(&options)
+
+ resolveHTTPSignerV4(&options)
+
+ resolveEndpointResolverV2(&options)
+
+ resolveTracerProvider(&options)
+
+ resolveMeterProvider(&options)
+
+ resolveAuthSchemeResolver(&options)
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ finalizeRetryMaxAttempts(&options)
+
+ ignoreAnonymousAuth(&options)
+
+ wrapWithAnonymousAuth(&options)
+
+ resolveAuthSchemes(&options)
+
+ client := &Client{
+ options: options,
+ }
+
+ initializeTimeOffsetResolver(client)
+
+ return client
+}
+
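`New` above applies a fixed resolution order: defaults are resolved first, the caller's `optFns` run in the middle, and finalization steps (retry max attempts, auth schemes) run after, so functional options can override defaults but cannot bypass the finalizers. A stdlib-only sketch of that ordering (the `Options` fields and `newClient` name are illustrative, not the SDK's):

```go
package main

import "fmt"

// Options is an illustrative config struct; Region and RetryMaxAttempts
// stand in for the many fields the generated Options carries.
type Options struct {
	Region           string
	RetryMaxAttempts int
}

// newClient resolves defaults, then applies caller options, then
// finalizes, mirroring the order used by the generated New.
func newClient(opts Options, optFns ...func(*Options)) Options {
	// 1. resolve defaults for anything left unset
	if opts.RetryMaxAttempts == 0 {
		opts.RetryMaxAttempts = 3
	}
	// 2. caller-supplied functional options may override the defaults
	for _, fn := range optFns {
		fn(&opts)
	}
	// 3. finalization runs last, after all options have been applied
	if opts.RetryMaxAttempts < 1 {
		opts.RetryMaxAttempts = 1
	}
	return opts
}

func main() {
	o := newClient(Options{Region: "us-east-1"}, func(o *Options) {
		o.RetryMaxAttempts = 5
	})
	fmt.Println(o.RetryMaxAttempts) // 5
}
```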
+// Options returns a copy of the client configuration.
+//
+// Callers SHOULD NOT perform mutations on any inner structures within client
+// config. Config overrides should instead be made on a per-operation basis through
+// functional options.
+func (c *Client) Options() Options {
+ return c.options.Copy()
+}
+
+func (c *Client) invokeOperation(
+ ctx context.Context, opID string, params interface{}, optFns []func(*Options), stackFns ...func(*middleware.Stack, Options) error,
+) (
+ result interface{}, metadata middleware.Metadata, err error,
+) {
+ ctx = middleware.ClearStackValues(ctx)
+ ctx = middleware.WithServiceID(ctx, ServiceID)
+ ctx = middleware.WithOperationName(ctx, opID)
+
+ stack := middleware.NewStack(opID, smithyhttp.NewStackRequest)
+ options := c.options.Copy()
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ finalizeOperationRetryMaxAttempts(&options, *c)
+
+ finalizeClientEndpointResolverOptions(&options)
+
+ for _, fn := range stackFns {
+ if err := fn(stack, options); err != nil {
+ return nil, metadata, err
+ }
+ }
+
+ for _, fn := range options.APIOptions {
+ if err := fn(stack); err != nil {
+ return nil, metadata, err
+ }
+ }
+
+ ctx, err = withOperationMetrics(ctx, options.MeterProvider)
+ if err != nil {
+ return nil, metadata, err
+ }
+
+ tracer := operationTracer(options.TracerProvider)
+ spanName := fmt.Sprintf("%s.%s", ServiceID, opID)
+
+ ctx = tracing.WithOperationTracer(ctx, tracer)
+
+ ctx, span := tracer.StartSpan(ctx, spanName, func(o *tracing.SpanOptions) {
+ o.Kind = tracing.SpanKindClient
+ o.Properties.Set("rpc.system", "aws-api")
+ o.Properties.Set("rpc.method", opID)
+ o.Properties.Set("rpc.service", ServiceID)
+ })
+ endTimer := startMetricTimer(ctx, "client.call.duration")
+ defer endTimer()
+ defer span.End()
+
+ handler := smithyhttp.NewClientHandlerWithOptions(options.HTTPClient, func(o *smithyhttp.ClientHandler) {
+ o.Meter = options.MeterProvider.Meter("github.com/aws/aws-sdk-go-v2/service/backupsearch")
+ })
+ decorated := middleware.DecorateHandler(handler, stack)
+ result, metadata, err = decorated.Handle(ctx, params)
+ if err != nil {
+ span.SetProperty("exception.type", fmt.Sprintf("%T", err))
+ span.SetProperty("exception.message", err.Error())
+
+ var aerr smithy.APIError
+ if errors.As(err, &aerr) {
+ span.SetProperty("api.error_code", aerr.ErrorCode())
+ span.SetProperty("api.error_message", aerr.ErrorMessage())
+ span.SetProperty("api.error_fault", aerr.ErrorFault().String())
+ }
+
+ err = &smithy.OperationError{
+ ServiceID: ServiceID,
+ OperationName: opID,
+ Err: err,
+ }
+ }
+
+ span.SetProperty("error", err != nil)
+ if err == nil {
+ span.SetStatus(tracing.SpanStatusOK)
+ } else {
+ span.SetStatus(tracing.SpanStatusError)
+ }
+
+ return result, metadata, err
+}
+
+type operationInputKey struct{}
+
+func setOperationInput(ctx context.Context, input interface{}) context.Context {
+ return middleware.WithStackValue(ctx, operationInputKey{}, input)
+}
+
+func getOperationInput(ctx context.Context) interface{} {
+ return middleware.GetStackValue(ctx, operationInputKey{})
+}
+
+type setOperationInputMiddleware struct {
+}
+
+func (*setOperationInputMiddleware) ID() string {
+ return "setOperationInput"
+}
+
+func (m *setOperationInputMiddleware) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ ctx = setOperationInput(ctx, in.Parameters)
+ return next.HandleSerialize(ctx, in)
+}
+
+func addProtocolFinalizerMiddlewares(stack *middleware.Stack, options Options, operation string) error {
+ if err := stack.Finalize.Add(&resolveAuthSchemeMiddleware{operation: operation, options: options}, middleware.Before); err != nil {
+ return fmt.Errorf("add ResolveAuthScheme: %w", err)
+ }
+ if err := stack.Finalize.Insert(&getIdentityMiddleware{options: options}, "ResolveAuthScheme", middleware.After); err != nil {
+ return fmt.Errorf("add GetIdentity: %w", err)
+ }
+ if err := stack.Finalize.Insert(&resolveEndpointV2Middleware{options: options}, "GetIdentity", middleware.After); err != nil {
+ return fmt.Errorf("add ResolveEndpointV2: %w", err)
+ }
+ if err := stack.Finalize.Insert(&signRequestMiddleware{options: options}, "ResolveEndpointV2", middleware.After); err != nil {
+ return fmt.Errorf("add Signing: %w", err)
+ }
+ return nil
+}
+func resolveAuthSchemeResolver(options *Options) {
+ if options.AuthSchemeResolver == nil {
+ options.AuthSchemeResolver = &defaultAuthSchemeResolver{}
+ }
+}
+
+func resolveAuthSchemes(options *Options) {
+ if options.AuthSchemes == nil {
+ options.AuthSchemes = []smithyhttp.AuthScheme{
+ internalauth.NewHTTPAuthScheme("aws.auth#sigv4", &internalauthsmithy.V4SignerAdapter{
+ Signer: options.HTTPSignerV4,
+ Logger: options.Logger,
+ LogSigning: options.ClientLogMode.IsSigning(),
+ }),
+ }
+ }
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde
+
+type legacyEndpointContextSetter struct {
+ LegacyResolver EndpointResolver
+}
+
+func (*legacyEndpointContextSetter) ID() string {
+ return "legacyEndpointContextSetter"
+}
+
+func (m *legacyEndpointContextSetter) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ if m.LegacyResolver != nil {
+ ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, true)
+ }
+
+ return next.HandleInitialize(ctx, in)
+
+}
+func addlegacyEndpointContextSetter(stack *middleware.Stack, o Options) error {
+ return stack.Initialize.Add(&legacyEndpointContextSetter{
+ LegacyResolver: o.EndpointResolver,
+ }, middleware.Before)
+}
+
+func resolveDefaultLogger(o *Options) {
+ if o.Logger != nil {
+ return
+ }
+ o.Logger = logging.Nop{}
+}
+
+func addSetLoggerMiddleware(stack *middleware.Stack, o Options) error {
+ return middleware.AddSetLoggerMiddleware(stack, o.Logger)
+}
+
+func setResolvedDefaultsMode(o *Options) {
+ if len(o.resolvedDefaultsMode) > 0 {
+ return
+ }
+
+ var mode aws.DefaultsMode
+ mode.SetFromString(string(o.DefaultsMode))
+
+ if mode == aws.DefaultsModeAuto {
+ mode = defaults.ResolveDefaultsModeAuto(o.Region, o.RuntimeEnvironment)
+ }
+
+ o.resolvedDefaultsMode = mode
+}
+
+// NewFromConfig returns a new client from the provided config.
+func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client {
+ opts := Options{
+ Region: cfg.Region,
+ DefaultsMode: cfg.DefaultsMode,
+ RuntimeEnvironment: cfg.RuntimeEnvironment,
+ HTTPClient: cfg.HTTPClient,
+ Credentials: cfg.Credentials,
+ APIOptions: cfg.APIOptions,
+ Logger: cfg.Logger,
+ ClientLogMode: cfg.ClientLogMode,
+ AppID: cfg.AppID,
+ }
+ resolveAWSRetryerProvider(cfg, &opts)
+ resolveAWSRetryMaxAttempts(cfg, &opts)
+ resolveAWSRetryMode(cfg, &opts)
+ resolveAWSEndpointResolver(cfg, &opts)
+ resolveUseDualStackEndpoint(cfg, &opts)
+ resolveUseFIPSEndpoint(cfg, &opts)
+ resolveBaseEndpoint(cfg, &opts)
+ return New(opts, optFns...)
+}
+
+func resolveHTTPClient(o *Options) {
+ var buildable *awshttp.BuildableClient
+
+ if o.HTTPClient != nil {
+ var ok bool
+ buildable, ok = o.HTTPClient.(*awshttp.BuildableClient)
+ if !ok {
+ return
+ }
+ } else {
+ buildable = awshttp.NewBuildableClient()
+ }
+
+ modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+ if err == nil {
+ buildable = buildable.WithDialerOptions(func(dialer *net.Dialer) {
+ if dialerTimeout, ok := modeConfig.GetConnectTimeout(); ok {
+ dialer.Timeout = dialerTimeout
+ }
+ })
+
+ buildable = buildable.WithTransportOptions(func(transport *http.Transport) {
+ if tlsHandshakeTimeout, ok := modeConfig.GetTLSNegotiationTimeout(); ok {
+ transport.TLSHandshakeTimeout = tlsHandshakeTimeout
+ }
+ })
+ }
+
+ o.HTTPClient = buildable
+}
+
+func resolveRetryer(o *Options) {
+ if o.Retryer != nil {
+ return
+ }
+
+ if len(o.RetryMode) == 0 {
+ modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+ if err == nil {
+ o.RetryMode = modeConfig.RetryMode
+ }
+ }
+ if len(o.RetryMode) == 0 {
+ o.RetryMode = aws.RetryModeStandard
+ }
+
+ var standardOptions []func(*retry.StandardOptions)
+ if v := o.RetryMaxAttempts; v != 0 {
+ standardOptions = append(standardOptions, func(so *retry.StandardOptions) {
+ so.MaxAttempts = v
+ })
+ }
+
+ switch o.RetryMode {
+ case aws.RetryModeAdaptive:
+ var adaptiveOptions []func(*retry.AdaptiveModeOptions)
+ if len(standardOptions) != 0 {
+ adaptiveOptions = append(adaptiveOptions, func(ao *retry.AdaptiveModeOptions) {
+ ao.StandardOptions = append(ao.StandardOptions, standardOptions...)
+ })
+ }
+ o.Retryer = retry.NewAdaptiveMode(adaptiveOptions...)
+
+ default:
+ o.Retryer = retry.NewStandard(standardOptions...)
+ }
+}
+
+func resolveAWSRetryerProvider(cfg aws.Config, o *Options) {
+ if cfg.Retryer == nil {
+ return
+ }
+ o.Retryer = cfg.Retryer()
+}
+
+func resolveAWSRetryMode(cfg aws.Config, o *Options) {
+ if len(cfg.RetryMode) == 0 {
+ return
+ }
+ o.RetryMode = cfg.RetryMode
+}
+func resolveAWSRetryMaxAttempts(cfg aws.Config, o *Options) {
+ if cfg.RetryMaxAttempts == 0 {
+ return
+ }
+ o.RetryMaxAttempts = cfg.RetryMaxAttempts
+}
+
+func finalizeRetryMaxAttempts(o *Options) {
+ if o.RetryMaxAttempts == 0 {
+ return
+ }
+
+ o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func finalizeOperationRetryMaxAttempts(o *Options, client Client) {
+ if v := o.RetryMaxAttempts; v == 0 || v == client.options.RetryMaxAttempts {
+ return
+ }
+
+ o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func resolveAWSEndpointResolver(cfg aws.Config, o *Options) {
+ if cfg.EndpointResolver == nil && cfg.EndpointResolverWithOptions == nil {
+ return
+ }
+ o.EndpointResolver = withEndpointResolver(cfg.EndpointResolver, cfg.EndpointResolverWithOptions)
+}
+
+func addClientUserAgent(stack *middleware.Stack, options Options) error {
+ ua, err := getOrAddRequestUserAgent(stack)
+ if err != nil {
+ return err
+ }
+
+ ua.AddSDKAgentKeyValue(awsmiddleware.APIMetadata, "backupsearch", goModuleVersion)
+ if len(options.AppID) > 0 {
+ ua.AddSDKAgentKey(awsmiddleware.ApplicationIdentifier, options.AppID)
+ }
+
+ return nil
+}
+
+func getOrAddRequestUserAgent(stack *middleware.Stack) (*awsmiddleware.RequestUserAgent, error) {
+ id := (*awsmiddleware.RequestUserAgent)(nil).ID()
+ mw, ok := stack.Build.Get(id)
+ if !ok {
+ mw = awsmiddleware.NewRequestUserAgent()
+ if err := stack.Build.Add(mw, middleware.After); err != nil {
+ return nil, err
+ }
+ }
+
+ ua, ok := mw.(*awsmiddleware.RequestUserAgent)
+ if !ok {
+ return nil, fmt.Errorf("%T for %s middleware did not match expected type", mw, id)
+ }
+
+ return ua, nil
+}
+
+type HTTPSignerV4 interface {
+ SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4.SignerOptions)) error
+}
+
+func resolveHTTPSignerV4(o *Options) {
+ if o.HTTPSignerV4 != nil {
+ return
+ }
+ o.HTTPSignerV4 = newDefaultV4Signer(*o)
+}
+
+func newDefaultV4Signer(o Options) *v4.Signer {
+ return v4.NewSigner(func(so *v4.SignerOptions) {
+ so.Logger = o.Logger
+ so.LogSigning = o.ClientLogMode.IsSigning()
+ })
+}
+
+func addClientRequestID(stack *middleware.Stack) error {
+ return stack.Build.Add(&awsmiddleware.ClientRequestID{}, middleware.After)
+}
+
+func addComputeContentLength(stack *middleware.Stack) error {
+ return stack.Build.Add(&smithyhttp.ComputeContentLength{}, middleware.After)
+}
+
+func addRawResponseToMetadata(stack *middleware.Stack) error {
+ return stack.Deserialize.Add(&awsmiddleware.AddRawResponse{}, middleware.Before)
+}
+
+func addRecordResponseTiming(stack *middleware.Stack) error {
+ return stack.Deserialize.Add(&awsmiddleware.RecordResponseTiming{}, middleware.After)
+}
+
+func addSpanRetryLoop(stack *middleware.Stack, options Options) error {
+ return stack.Finalize.Insert(&spanRetryLoop{options: options}, "Retry", middleware.Before)
+}
+
+type spanRetryLoop struct {
+ options Options
+}
+
+func (*spanRetryLoop) ID() string {
+ return "spanRetryLoop"
+}
+
+func (m *spanRetryLoop) HandleFinalize(
+ ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+ middleware.FinalizeOutput, middleware.Metadata, error,
+) {
+ tracer := operationTracer(m.options.TracerProvider)
+ ctx, span := tracer.StartSpan(ctx, "RetryLoop")
+ defer span.End()
+
+ return next.HandleFinalize(ctx, in)
+}
+func addStreamingEventsPayload(stack *middleware.Stack) error {
+ return stack.Finalize.Add(&v4.StreamingEventsPayload{}, middleware.Before)
+}
+
+func addUnsignedPayload(stack *middleware.Stack) error {
+ return stack.Finalize.Insert(&v4.UnsignedPayload{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addComputePayloadSHA256(stack *middleware.Stack) error {
+ return stack.Finalize.Insert(&v4.ComputePayloadSHA256{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addContentSHA256Header(stack *middleware.Stack) error {
+ return stack.Finalize.Insert(&v4.ContentSHA256Header{}, (*v4.ComputePayloadSHA256)(nil).ID(), middleware.After)
+}
+
+func addIsWaiterUserAgent(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ ua, err := getOrAddRequestUserAgent(stack)
+ if err != nil {
+ return err
+ }
+
+ ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureWaiter)
+ return nil
+ })
+}
+
+func addIsPaginatorUserAgent(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ ua, err := getOrAddRequestUserAgent(stack)
+ if err != nil {
+ return err
+ }
+
+ ua.AddUserAgentFeature(awsmiddleware.UserAgentFeaturePaginator)
+ return nil
+ })
+}
+
+func addRetry(stack *middleware.Stack, o Options) error {
+ attempt := retry.NewAttemptMiddleware(o.Retryer, smithyhttp.RequestCloner, func(m *retry.Attempt) {
+ m.LogAttempts = o.ClientLogMode.IsRetries()
+ m.OperationMeter = o.MeterProvider.Meter("github.com/aws/aws-sdk-go-v2/service/backupsearch")
+ })
+ if err := stack.Finalize.Insert(attempt, "Signing", middleware.Before); err != nil {
+ return err
+ }
+ if err := stack.Finalize.Insert(&retry.MetricsHeader{}, attempt.ID(), middleware.After); err != nil {
+ return err
+ }
+ return nil
+}
+
+// resolves dual-stack endpoint configuration
+func resolveUseDualStackEndpoint(cfg aws.Config, o *Options) error {
+ if len(cfg.ConfigSources) == 0 {
+ return nil
+ }
+ value, found, err := internalConfig.ResolveUseDualStackEndpoint(context.Background(), cfg.ConfigSources)
+ if err != nil {
+ return err
+ }
+ if found {
+ o.EndpointOptions.UseDualStackEndpoint = value
+ }
+ return nil
+}
+
+// resolves FIPS endpoint configuration
+func resolveUseFIPSEndpoint(cfg aws.Config, o *Options) error {
+ if len(cfg.ConfigSources) == 0 {
+ return nil
+ }
+ value, found, err := internalConfig.ResolveUseFIPSEndpoint(context.Background(), cfg.ConfigSources)
+ if err != nil {
+ return err
+ }
+ if found {
+ o.EndpointOptions.UseFIPSEndpoint = value
+ }
+ return nil
+}
+
+func resolveAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) *string {
+ if mode == aws.AccountIDEndpointModeDisabled {
+ return nil
+ }
+
+ if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); ok && ca.Credentials.AccountID != "" {
+ return aws.String(ca.Credentials.AccountID)
+ }
+
+ return nil
+}
+
+func addTimeOffsetBuild(stack *middleware.Stack, c *Client) error {
+ mw := internalmiddleware.AddTimeOffsetMiddleware{Offset: c.timeOffset}
+ if err := stack.Build.Add(&mw, middleware.After); err != nil {
+ return err
+ }
+ return stack.Deserialize.Insert(&mw, "RecordResponseTiming", middleware.Before)
+}
+func initializeTimeOffsetResolver(c *Client) {
+ c.timeOffset = new(atomic.Int64)
+}
+
+func addUserAgentRetryMode(stack *middleware.Stack, options Options) error {
+ ua, err := getOrAddRequestUserAgent(stack)
+ if err != nil {
+ return err
+ }
+
+ switch options.Retryer.(type) {
+ case *retry.Standard:
+ ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeStandard)
+ case *retry.AdaptiveMode:
+ ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeAdaptive)
+ }
+ return nil
+}
+
+func resolveTracerProvider(options *Options) {
+ if options.TracerProvider == nil {
+ options.TracerProvider = &tracing.NopTracerProvider{}
+ }
+}
+
+func resolveMeterProvider(options *Options) {
+ if options.MeterProvider == nil {
+ options.MeterProvider = metrics.NopMeterProvider{}
+ }
+}
+
+func addRecursionDetection(stack *middleware.Stack) error {
+ return stack.Build.Add(&awsmiddleware.RecursionDetection{}, middleware.After)
+}
+
+func addRequestIDRetrieverMiddleware(stack *middleware.Stack) error {
+ return stack.Deserialize.Insert(&awsmiddleware.RequestIDRetriever{}, "OperationDeserializer", middleware.Before)
+
+}
+
+func addResponseErrorMiddleware(stack *middleware.Stack) error {
+ return stack.Deserialize.Insert(&awshttp.ResponseErrorWrapper{}, "RequestIDRetriever", middleware.Before)
+
+}
+
+func addRequestResponseLogging(stack *middleware.Stack, o Options) error {
+ return stack.Deserialize.Add(&smithyhttp.RequestResponseLogger{
+ LogRequest: o.ClientLogMode.IsRequest(),
+ LogRequestWithBody: o.ClientLogMode.IsRequestWithBody(),
+ LogResponse: o.ClientLogMode.IsResponse(),
+ LogResponseWithBody: o.ClientLogMode.IsResponseWithBody(),
+ }, middleware.After)
+}
+
+type disableHTTPSMiddleware struct {
+ DisableHTTPS bool
+}
+
+func (*disableHTTPSMiddleware) ID() string {
+ return "disableHTTPS"
+}
+
+func (m *disableHTTPSMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+ out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+ req, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+ }
+
+ if m.DisableHTTPS && !smithyhttp.GetHostnameImmutable(ctx) {
+ req.URL.Scheme = "http"
+ }
+
+ return next.HandleFinalize(ctx, in)
+}
+
+func addDisableHTTPSMiddleware(stack *middleware.Stack, o Options) error {
+ return stack.Finalize.Insert(&disableHTTPSMiddleware{
+ DisableHTTPS: o.EndpointOptions.DisableHTTPS,
+ }, "ResolveEndpointV2", middleware.After)
+}
+
+type spanInitializeStart struct {
+}
+
+func (*spanInitializeStart) ID() string {
+ return "spanInitializeStart"
+}
+
+func (m *spanInitializeStart) HandleInitialize(
+ ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
+) (
+ middleware.InitializeOutput, middleware.Metadata, error,
+) {
+ ctx, _ = tracing.StartSpan(ctx, "Initialize")
+
+ return next.HandleInitialize(ctx, in)
+}
+
+type spanInitializeEnd struct {
+}
+
+func (*spanInitializeEnd) ID() string {
+ return "spanInitializeEnd"
+}
+
+func (m *spanInitializeEnd) HandleInitialize(
+ ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
+) (
+ middleware.InitializeOutput, middleware.Metadata, error,
+) {
+ ctx, span := tracing.PopSpan(ctx)
+ span.End()
+
+ return next.HandleInitialize(ctx, in)
+}
+
+type spanBuildRequestStart struct {
+}
+
+func (*spanBuildRequestStart) ID() string {
+ return "spanBuildRequestStart"
+}
+
+func (m *spanBuildRequestStart) HandleSerialize(
+ ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler,
+) (
+ middleware.SerializeOutput, middleware.Metadata, error,
+) {
+ ctx, _ = tracing.StartSpan(ctx, "BuildRequest")
+
+ return next.HandleSerialize(ctx, in)
+}
+
+type spanBuildRequestEnd struct {
+}
+
+func (*spanBuildRequestEnd) ID() string {
+ return "spanBuildRequestEnd"
+}
+
+func (m *spanBuildRequestEnd) HandleBuild(
+ ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler,
+) (
+ middleware.BuildOutput, middleware.Metadata, error,
+) {
+ ctx, span := tracing.PopSpan(ctx)
+ span.End()
+
+ return next.HandleBuild(ctx, in)
+}
+
+func addSpanInitializeStart(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&spanInitializeStart{}, middleware.Before)
+}
+
+func addSpanInitializeEnd(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&spanInitializeEnd{}, middleware.After)
+}
+
+func addSpanBuildRequestStart(stack *middleware.Stack) error {
+ return stack.Serialize.Add(&spanBuildRequestStart{}, middleware.Before)
+}
+
+func addSpanBuildRequestEnd(stack *middleware.Stack) error {
+ return stack.Build.Add(&spanBuildRequestEnd{}, middleware.After)
+}
diff --git a/service/backupsearch/api_client_test.go b/service/backupsearch/api_client_test.go
new file mode 100644
index 00000000000..bccd06b14f3
--- /dev/null
+++ b/service/backupsearch/api_client_test.go
@@ -0,0 +1,127 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "github.com/aws/aws-sdk-go-v2/aws"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "io/ioutil"
+ "net/http"
+ "strings"
+ "testing"
+)
+
+func TestClient_resolveRetryOptions(t *testing.T) {
+ nopClient := smithyhttp.ClientDoFunc(func(_ *http.Request) (*http.Response, error) {
+ return &http.Response{
+ StatusCode: 200,
+ Header: http.Header{},
+ Body: ioutil.NopCloser(strings.NewReader("")),
+ }, nil
+ })
+
+ cases := map[string]struct {
+ defaultsMode aws.DefaultsMode
+ retryer aws.Retryer
+ retryMaxAttempts int
+ opRetryMaxAttempts *int
+ retryMode aws.RetryMode
+ expectClientRetryMode aws.RetryMode
+ expectClientMaxAttempts int
+ expectOpMaxAttempts int
+ }{
+ "defaults": {
+ defaultsMode: aws.DefaultsModeStandard,
+ expectClientRetryMode: aws.RetryModeStandard,
+ expectClientMaxAttempts: 3,
+ expectOpMaxAttempts: 3,
+ },
+ "custom default retry": {
+ retryMode: aws.RetryModeAdaptive,
+ retryMaxAttempts: 10,
+ expectClientRetryMode: aws.RetryModeAdaptive,
+ expectClientMaxAttempts: 10,
+ expectOpMaxAttempts: 10,
+ },
+ "custom op max attempts": {
+ retryMode: aws.RetryModeAdaptive,
+ retryMaxAttempts: 10,
+ opRetryMaxAttempts: aws.Int(2),
+ expectClientRetryMode: aws.RetryModeAdaptive,
+ expectClientMaxAttempts: 10,
+ expectOpMaxAttempts: 2,
+ },
+ "custom op no change max attempts": {
+ retryMode: aws.RetryModeAdaptive,
+ retryMaxAttempts: 10,
+ opRetryMaxAttempts: aws.Int(10),
+ expectClientRetryMode: aws.RetryModeAdaptive,
+ expectClientMaxAttempts: 10,
+ expectOpMaxAttempts: 10,
+ },
+ "custom op 0 max attempts": {
+ retryMode: aws.RetryModeAdaptive,
+ retryMaxAttempts: 10,
+ opRetryMaxAttempts: aws.Int(0),
+ expectClientRetryMode: aws.RetryModeAdaptive,
+ expectClientMaxAttempts: 10,
+ expectOpMaxAttempts: 10,
+ },
+ }
+
+ for name, c := range cases {
+ t.Run(name, func(t *testing.T) {
+ client := NewFromConfig(aws.Config{
+ DefaultsMode: c.defaultsMode,
+ Retryer: func() func() aws.Retryer {
+ if c.retryer == nil {
+ return nil
+ }
+
+ return func() aws.Retryer { return c.retryer }
+ }(),
+ HTTPClient: nopClient,
+ RetryMaxAttempts: c.retryMaxAttempts,
+ RetryMode: c.retryMode,
+ }, func(o *Options) {
+ if o.Retryer == nil {
+ t.Errorf("retryer must not be nil in functional options")
+ }
+ })
+
+ if e, a := c.expectClientRetryMode, client.options.RetryMode; e != a {
+ t.Errorf("expect %v retry mode, got %v", e, a)
+ }
+ if e, a := c.expectClientMaxAttempts, client.options.Retryer.MaxAttempts(); e != a {
+ t.Errorf("expect %v max attempts, got %v", e, a)
+ }
+
+ _, _, err := client.invokeOperation(context.Background(), "mockOperation", struct{}{},
+ []func(*Options){
+ func(o *Options) {
+ if c.opRetryMaxAttempts == nil {
+ return
+ }
+ o.RetryMaxAttempts = *c.opRetryMaxAttempts
+ },
+ },
+ func(s *middleware.Stack, o Options) error {
+ s.Initialize.Clear()
+ s.Serialize.Clear()
+ s.Build.Clear()
+ s.Finalize.Clear()
+ s.Deserialize.Clear()
+
+ if e, a := c.expectOpMaxAttempts, o.Retryer.MaxAttempts(); e != a {
+ t.Errorf("expect %v op max attempts, got %v", e, a)
+ }
+ return nil
+ })
+ if err != nil {
+ t.Fatalf("expect no operation error, got %v", err)
+ }
+ })
+ }
+}
diff --git a/service/backupsearch/api_op_GetSearchJob.go b/service/backupsearch/api_op_GetSearchJob.go
new file mode 100644
index 00000000000..81011fdabea
--- /dev/null
+++ b/service/backupsearch/api_op_GetSearchJob.go
@@ -0,0 +1,227 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+// This operation retrieves metadata of a search job, including its progress.
+func (c *Client) GetSearchJob(ctx context.Context, params *GetSearchJobInput, optFns ...func(*Options)) (*GetSearchJobOutput, error) {
+ if params == nil {
+ params = &GetSearchJobInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetSearchJob", params, optFns, c.addOperationGetSearchJobMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetSearchJobOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetSearchJobInput struct {
+
+ // Required unique string that specifies the search job.
+ //
+ // This member is required.
+ SearchJobIdentifier *string
+
+ noSmithyDocumentSerde
+}
+
+type GetSearchJobOutput struct {
+
+ // The date and time that a search job was created, in Unix format and Coordinated
+ // Universal Time (UTC). The value of CreationTime is accurate to milliseconds.
+ // For example, the value 1516925490.087 represents Friday, January 26, 2018
+ // 12:11:30.087 AM.
+ //
+ // This member is required.
+ CreationTime *time.Time
+
+ // Item Filters represent all input item properties specified when the search was
+ // created.
+ //
+ // This member is required.
+ ItemFilters *types.ItemFilters
+
+ // The unique string that identifies the Amazon Resource Name (ARN) of the
+ // specified search job.
+ //
+ // This member is required.
+ SearchJobArn *string
+
+ // The unique string that identifies the specified search job.
+ //
+ // This member is required.
+ SearchJobIdentifier *string
+
+ // The search scope is all backup properties input into a search.
+ //
+ // This member is required.
+ SearchScope *types.SearchScope
+
+ // The current status of the specified search job.
+ //
+ // A search job may have one of the following statuses: RUNNING ; COMPLETED ;
+ // STOPPED ; FAILED ; TIMED_OUT ; or EXPIRED .
+ //
+ // This member is required.
+ Status types.SearchJobState
+
+ // The date and time that a search job completed, in Unix format and Coordinated
+ // Universal Time (UTC). The value of CompletionTime is accurate to milliseconds.
+ // For example, the value 1516925490.087 represents Friday, January 26, 2018
+ // 12:11:30.087 AM.
+ CompletionTime *time.Time
+
+ // Returns numbers representing BackupsScannedCount, ItemsScanned, and
+ // ItemsMatched.
+ CurrentSearchProgress *types.CurrentSearchProgress
+
+ // The encryption key for the specified search job.
+ //
+ // Example:
+ // arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab .
+ EncryptionKeyArn *string
+
+ // Returned name of the specified search job.
+ Name *string
+
+ // Returned summary of the specified search job scope, including:
+ //
+ // - TotalBackupsToScanCount, the number of recovery points returned by the
+ // search.
+ //
+ // - TotalItemsToScanCount, the number of items returned by the search.
+ SearchScopeSummary *types.SearchScopeSummary
+
+ // A status message will be returned for either a search job with a status of
+ // ERRORED or a search job with a status of COMPLETED that has issues.
+ //
+ // For example, a message may say that a search contained recovery points unable
+ // to be scanned because of a permissions issue.
+ StatusMessage *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetSearchJobMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpGetSearchJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetSearchJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetSearchJob"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetSearchJobValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetSearchJob(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetSearchJob(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetSearchJob",
+ }
+}
diff --git a/service/backupsearch/api_op_GetSearchResultExportJob.go b/service/backupsearch/api_op_GetSearchResultExportJob.go
new file mode 100644
index 00000000000..f5656d6b58d
--- /dev/null
+++ b/service/backupsearch/api_op_GetSearchResultExportJob.go
@@ -0,0 +1,198 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+// This operation retrieves the metadata of an export job.
+//
+// An export job is an operation that transmits the results of a search job to a
+// specified S3 bucket in a .csv file.
+//
+// An export job allows you to retain results of a search beyond the search job's
+// scheduled retention of 7 days.
+func (c *Client) GetSearchResultExportJob(ctx context.Context, params *GetSearchResultExportJobInput, optFns ...func(*Options)) (*GetSearchResultExportJobOutput, error) {
+ if params == nil {
+ params = &GetSearchResultExportJobInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetSearchResultExportJob", params, optFns, c.addOperationGetSearchResultExportJobMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetSearchResultExportJobOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetSearchResultExportJobInput struct {
+
+ // This is the unique string that identifies a specific export job.
+ //
+ // Required for this operation.
+ //
+ // This member is required.
+ ExportJobIdentifier *string
+
+ noSmithyDocumentSerde
+}
+
+type GetSearchResultExportJobOutput struct {
+
+ // This is the unique string that identifies the specified export job.
+ //
+ // This member is required.
+ ExportJobIdentifier *string
+
+ // The date and time that an export job completed, in Unix format and Coordinated
+ // Universal Time (UTC). The value of CompletionTime is accurate to milliseconds.
+ // For example, the value 1516925490.087 represents Friday, January 26, 2018
+ // 12:11:30.087 AM.
+ CompletionTime *time.Time
+
+ // The date and time that an export job was created, in Unix format and
+ // Coordinated Universal Time (UTC). The value of CreationTime is accurate to
+ // milliseconds. For example, the value 1516925490.087 represents Friday, January
+ // 26, 2018 12:11:30.087 AM.
+ CreationTime *time.Time
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the export job.
+ ExportJobArn *string
+
+ // The export specification consists of the destination S3 bucket to which the
+ // search results were exported, along with the destination prefix.
+ ExportSpecification types.ExportSpecification
+
+ // The unique string that identifies the Amazon Resource Name (ARN) of the
+ // specified search job.
+ SearchJobArn *string
+
+ // This is the current status of the export job.
+ Status types.ExportJobStatus
+
+ // A status message is a string that is returned for a search job with a status
+ // of FAILED , along with steps to remedy and retry the operation.
+ StatusMessage *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetSearchResultExportJobMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpGetSearchResultExportJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetSearchResultExportJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetSearchResultExportJob"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetSearchResultExportJobValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetSearchResultExportJob(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetSearchResultExportJob(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetSearchResultExportJob",
+ }
+}
diff --git a/service/backupsearch/api_op_ListSearchJobBackups.go b/service/backupsearch/api_op_ListSearchJobBackups.go
new file mode 100644
index 00000000000..595b5d17041
--- /dev/null
+++ b/service/backupsearch/api_op_ListSearchJobBackups.go
@@ -0,0 +1,282 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation returns a paginated list of all backups (recovery points) that
+// were included in the search job.
+//
+// If a search does not display an expected backup in the results, you can call
+// this operation to display each backup included in the search. Backups that
+// were excluded because they have a FAILED status from a permissions issue are
+// still displayed, along with a status message.
+//
+// Only recovery points with a backup index that has a status of ACTIVE will be
+// included in search results. If the index has any other status, its status will
+// be displayed along with a status message.
+func (c *Client) ListSearchJobBackups(ctx context.Context, params *ListSearchJobBackupsInput, optFns ...func(*Options)) (*ListSearchJobBackupsOutput, error) {
+ if params == nil {
+ params = &ListSearchJobBackupsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListSearchJobBackups", params, optFns, c.addOperationListSearchJobBackupsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListSearchJobBackupsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListSearchJobBackupsInput struct {
+
+ // The unique string that specifies the search job.
+ //
+ // This member is required.
+ SearchJobIdentifier *string
+
+ // The maximum number of resource list items to be returned.
+ MaxResults *int32
+
+ // The next item following a partial list of returned backups included in a search
+ // job.
+ //
+ // For example, if a request is made to return MaxResults number of backups,
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ noSmithyDocumentSerde
+}
+
+type ListSearchJobBackupsOutput struct {
+
+ // The recovery points returned in the results of a search job.
+ //
+ // This member is required.
+ Results []types.SearchJobBackupsResult
+
+ // The next item following a partial list of returned backups included in a search
+ // job.
+ //
+ // For example, if a request is made to return MaxResults number of backups,
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListSearchJobBackupsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListSearchJobBackups{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListSearchJobBackups{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListSearchJobBackups"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpListSearchJobBackupsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListSearchJobBackups(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// ListSearchJobBackupsPaginatorOptions is the paginator options for
+// ListSearchJobBackups
+type ListSearchJobBackupsPaginatorOptions struct {
+ // The maximum number of resource list items to be returned.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// ListSearchJobBackupsPaginator is a paginator for ListSearchJobBackups
+type ListSearchJobBackupsPaginator struct {
+ options ListSearchJobBackupsPaginatorOptions
+ client ListSearchJobBackupsAPIClient
+ params *ListSearchJobBackupsInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewListSearchJobBackupsPaginator returns a new ListSearchJobBackupsPaginator
+func NewListSearchJobBackupsPaginator(client ListSearchJobBackupsAPIClient, params *ListSearchJobBackupsInput, optFns ...func(*ListSearchJobBackupsPaginatorOptions)) *ListSearchJobBackupsPaginator {
+ if params == nil {
+ params = &ListSearchJobBackupsInput{}
+ }
+
+ options := ListSearchJobBackupsPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &ListSearchJobBackupsPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListSearchJobBackupsPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListSearchJobBackups page.
+func (p *ListSearchJobBackupsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListSearchJobBackupsOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+ result, err := p.client.ListSearchJobBackups(ctx, &params, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// ListSearchJobBackupsAPIClient is a client that implements the
+// ListSearchJobBackups operation.
+type ListSearchJobBackupsAPIClient interface {
+ ListSearchJobBackups(context.Context, *ListSearchJobBackupsInput, ...func(*Options)) (*ListSearchJobBackupsOutput, error)
+}
+
+var _ ListSearchJobBackupsAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListSearchJobBackups(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListSearchJobBackups",
+ }
+}
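The generated paginator above follows a fixed token-handling pattern: `firstPage` guarantees at least one call, and iteration ends when `NextToken` comes back nil or empty. The following is a minimal self-contained sketch of that loop; `fakeClient`, `listOutput`, and `paginator` are simplified stand-ins for the generated types, not part of the SDK:

```go
package main

import "fmt"

// listOutput mirrors only the pagination-relevant fields of the
// generated ListSearchJobBackupsOutput.
type listOutput struct {
	NextToken *string
	Items     []string
}

// fakeClient returns two pages with continuation tokens, then a final
// page with no NextToken, simulating a paginated service.
type fakeClient struct{ calls int }

func (c *fakeClient) List(token *string) *listOutput {
	c.calls++
	switch c.calls {
	case 1:
		t := "page2"
		return &listOutput{NextToken: &t, Items: []string{"a", "b"}}
	case 2:
		t := "page3"
		return &listOutput{NextToken: &t, Items: []string{"c"}}
	default:
		return &listOutput{Items: []string{"d"}}
	}
}

// paginator reproduces the generated token handling: firstPage forces
// at least one call, and a nil/empty NextToken ends iteration.
type paginator struct {
	client    *fakeClient
	nextToken *string
	firstPage bool
}

func (p *paginator) HasMorePages() bool {
	return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
}

func (p *paginator) NextPage() *listOutput {
	out := p.client.List(p.nextToken)
	p.firstPage = false
	p.nextToken = out.NextToken
	return out
}

func main() {
	p := &paginator{client: &fakeClient{}, firstPage: true}
	var all []string
	for p.HasMorePages() {
		out := p.NextPage()
		all = append(all, out.Items...)
	}
	fmt.Println(len(all)) // all items collected across three pages
}
```

With the real SDK the shape is the same: construct the paginator with `NewListSearchJobBackupsPaginator(client, params)` and loop on `HasMorePages`/`NextPage`.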
diff --git a/service/backupsearch/api_op_ListSearchJobResults.go b/service/backupsearch/api_op_ListSearchJobResults.go
new file mode 100644
index 00000000000..9c092cb4192
--- /dev/null
+++ b/service/backupsearch/api_op_ListSearchJobResults.go
@@ -0,0 +1,270 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation returns a list of the results of a specified search job.
+func (c *Client) ListSearchJobResults(ctx context.Context, params *ListSearchJobResultsInput, optFns ...func(*Options)) (*ListSearchJobResultsOutput, error) {
+ if params == nil {
+ params = &ListSearchJobResultsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListSearchJobResults", params, optFns, c.addOperationListSearchJobResultsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListSearchJobResultsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListSearchJobResultsInput struct {
+
+ // The unique string that specifies the search job.
+ //
+ // This member is required.
+ SearchJobIdentifier *string
+
+ // The maximum number of resource list items to be returned.
+ MaxResults *int32
+
+ // The next item following a partial list of returned search job results.
+ //
+ // For example, if a request is made to return MaxResults number of search job
+ // results, NextToken allows you to return more items in your list starting at the
+ // location pointed to by the next token.
+ NextToken *string
+
+ noSmithyDocumentSerde
+}
+
+type ListSearchJobResultsOutput struct {
+
+ // The results consist of either EBSResultItem or S3ResultItem.
+ //
+ // This member is required.
+ Results []types.ResultItem
+
+ // The next item following a partial list of search job results.
+ //
+ // For example, if a request is made to return MaxResults number of search job
+ // results, NextToken allows you to return more items in your list starting at the
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListSearchJobResultsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListSearchJobResults{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListSearchJobResults{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListSearchJobResults"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpListSearchJobResultsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListSearchJobResults(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// ListSearchJobResultsPaginatorOptions is the paginator options for
+// ListSearchJobResults
+type ListSearchJobResultsPaginatorOptions struct {
+ // The maximum number of resource list items to be returned.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// ListSearchJobResultsPaginator is a paginator for ListSearchJobResults
+type ListSearchJobResultsPaginator struct {
+ options ListSearchJobResultsPaginatorOptions
+ client ListSearchJobResultsAPIClient
+ params *ListSearchJobResultsInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewListSearchJobResultsPaginator returns a new ListSearchJobResultsPaginator
+func NewListSearchJobResultsPaginator(client ListSearchJobResultsAPIClient, params *ListSearchJobResultsInput, optFns ...func(*ListSearchJobResultsPaginatorOptions)) *ListSearchJobResultsPaginator {
+ if params == nil {
+ params = &ListSearchJobResultsInput{}
+ }
+
+ options := ListSearchJobResultsPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &ListSearchJobResultsPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListSearchJobResultsPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListSearchJobResults page.
+func (p *ListSearchJobResultsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListSearchJobResultsOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+ result, err := p.client.ListSearchJobResults(ctx, &params, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// ListSearchJobResultsAPIClient is a client that implements the
+// ListSearchJobResults operation.
+type ListSearchJobResultsAPIClient interface {
+ ListSearchJobResults(context.Context, *ListSearchJobResultsInput, ...func(*Options)) (*ListSearchJobResultsOutput, error)
+}
+
+var _ ListSearchJobResultsAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListSearchJobResults(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListSearchJobResults",
+ }
+}
diff --git a/service/backupsearch/api_op_ListSearchJobs.go b/service/backupsearch/api_op_ListSearchJobs.go
new file mode 100644
index 00000000000..2f85a174657
--- /dev/null
+++ b/service/backupsearch/api_op_ListSearchJobs.go
@@ -0,0 +1,265 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation returns a list of search jobs belonging to an account.
+func (c *Client) ListSearchJobs(ctx context.Context, params *ListSearchJobsInput, optFns ...func(*Options)) (*ListSearchJobsOutput, error) {
+ if params == nil {
+ params = &ListSearchJobsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListSearchJobs", params, optFns, c.addOperationListSearchJobsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListSearchJobsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListSearchJobsInput struct {
+
+ // Include this parameter to filter the list by search job status.
+ ByStatus types.SearchJobState
+
+ // The maximum number of resource list items to be returned.
+ MaxResults *int32
+
+ // The next item following a partial list of returned search jobs.
+ //
+ // For example, if a request is made to return MaxResults number of search jobs,
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ noSmithyDocumentSerde
+}
+
+type ListSearchJobsOutput struct {
+
+ // The search jobs in the list, with details of each returned search job.
+ //
+ // This member is required.
+ SearchJobs []types.SearchJobSummary
+
+ // The next item following a partial list of returned search jobs.
+ //
+ // For example, if a request is made to return MaxResults number of search jobs,
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListSearchJobsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListSearchJobs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListSearchJobs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListSearchJobs"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListSearchJobs(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// ListSearchJobsPaginatorOptions is the paginator options for ListSearchJobs
+type ListSearchJobsPaginatorOptions struct {
+ // The maximum number of resource list items to be returned.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// ListSearchJobsPaginator is a paginator for ListSearchJobs
+type ListSearchJobsPaginator struct {
+ options ListSearchJobsPaginatorOptions
+ client ListSearchJobsAPIClient
+ params *ListSearchJobsInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewListSearchJobsPaginator returns a new ListSearchJobsPaginator
+func NewListSearchJobsPaginator(client ListSearchJobsAPIClient, params *ListSearchJobsInput, optFns ...func(*ListSearchJobsPaginatorOptions)) *ListSearchJobsPaginator {
+ if params == nil {
+ params = &ListSearchJobsInput{}
+ }
+
+ options := ListSearchJobsPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &ListSearchJobsPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListSearchJobsPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListSearchJobs page.
+func (p *ListSearchJobsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListSearchJobsOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+ result, err := p.client.ListSearchJobs(ctx, &params, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// ListSearchJobsAPIClient is a client that implements the ListSearchJobs
+// operation.
+type ListSearchJobsAPIClient interface {
+ ListSearchJobs(context.Context, *ListSearchJobsInput, ...func(*Options)) (*ListSearchJobsOutput, error)
+}
+
+var _ ListSearchJobsAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListSearchJobs(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListSearchJobs",
+ }
+}
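Each generated `NextPage` also carries a `StopOnDuplicateToken` guard: when the service echoes back the same token that was just sent, the paginator clears its token so `HasMorePages` returns false instead of looping forever. A self-contained sketch of that guard; `stuckClient`, `page`, and `paginator` are simplified stand-ins for illustration, not SDK types:

```go
package main

import "fmt"

type page struct{ NextToken *string }

// stuckClient always returns the same pagination token, simulating a
// service response that would otherwise cause an infinite loop.
type stuckClient struct{}

func (stuckClient) List(token *string) *page {
	t := "same"
	return &page{NextToken: &t}
}

// paginator reproduces the generated StopOnDuplicateToken logic.
type paginator struct {
	client               stuckClient
	nextToken            *string
	firstPage            bool
	stopOnDuplicateToken bool
}

func (p *paginator) HasMorePages() bool {
	return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
}

func (p *paginator) NextPage() *page {
	out := p.client.List(p.nextToken)
	p.firstPage = false

	prevToken := p.nextToken
	p.nextToken = out.NextToken

	// Stop when the service echoes back the token we just sent.
	if p.stopOnDuplicateToken && prevToken != nil && p.nextToken != nil &&
		*prevToken == *p.nextToken {
		p.nextToken = nil
	}
	return out
}

func main() {
	p := &paginator{firstPage: true, stopOnDuplicateToken: true}
	pages := 0
	for p.HasMorePages() {
		p.NextPage()
		pages++
	}
	fmt.Println(pages) // loop terminates instead of repeating forever
}
```

In the SDK, the equivalent opt-in is `func(o *ListSearchJobsPaginatorOptions) { o.StopOnDuplicateToken = true }` passed to the paginator constructor.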
diff --git a/service/backupsearch/api_op_ListSearchResultExportJobs.go b/service/backupsearch/api_op_ListSearchResultExportJobs.go
new file mode 100644
index 00000000000..4d4a116c566
--- /dev/null
+++ b/service/backupsearch/api_op_ListSearchResultExportJobs.go
@@ -0,0 +1,274 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation returns a list of jobs that export the results of a search
+// job to a specified destination S3 bucket.
+func (c *Client) ListSearchResultExportJobs(ctx context.Context, params *ListSearchResultExportJobsInput, optFns ...func(*Options)) (*ListSearchResultExportJobsOutput, error) {
+ if params == nil {
+ params = &ListSearchResultExportJobsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListSearchResultExportJobs", params, optFns, c.addOperationListSearchResultExportJobsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListSearchResultExportJobsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListSearchResultExportJobsInput struct {
+
+ // The maximum number of resource list items to be returned.
+ MaxResults *int32
+
+ // The next item following a partial list of returned search result export jobs.
+ //
+ // For example, if a request is made to return MaxResults number of export jobs,
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ // The unique string that specifies the search job.
+ SearchJobIdentifier *string
+
+ // Include this parameter to filter the returned export jobs by status.
+ Status types.ExportJobStatus
+
+ noSmithyDocumentSerde
+}
+
+type ListSearchResultExportJobsOutput struct {
+
+ // The export jobs returned by this operation.
+ //
+ // This member is required.
+ ExportJobs []types.ExportJobSummary
+
+ // The next item following a partial list of returned search result export jobs.
+ //
+ // For example, if a request is made to return MaxResults number of export jobs,
+ // NextToken allows you to return more items in your list starting at the location
+ // pointed to by the next token.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListSearchResultExportJobsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListSearchResultExportJobs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListSearchResultExportJobs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListSearchResultExportJobs"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListSearchResultExportJobs(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// ListSearchResultExportJobsPaginatorOptions is the paginator options for
+// ListSearchResultExportJobs
+type ListSearchResultExportJobsPaginatorOptions struct {
+ // The maximum number of resource list items to be returned.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// ListSearchResultExportJobsPaginator is a paginator for
+// ListSearchResultExportJobs
+type ListSearchResultExportJobsPaginator struct {
+ options ListSearchResultExportJobsPaginatorOptions
+ client ListSearchResultExportJobsAPIClient
+ params *ListSearchResultExportJobsInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewListSearchResultExportJobsPaginator returns a new
+// ListSearchResultExportJobsPaginator
+func NewListSearchResultExportJobsPaginator(client ListSearchResultExportJobsAPIClient, params *ListSearchResultExportJobsInput, optFns ...func(*ListSearchResultExportJobsPaginatorOptions)) *ListSearchResultExportJobsPaginator {
+ if params == nil {
+ params = &ListSearchResultExportJobsInput{}
+ }
+
+ options := ListSearchResultExportJobsPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &ListSearchResultExportJobsPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListSearchResultExportJobsPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListSearchResultExportJobs page.
+func (p *ListSearchResultExportJobsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListSearchResultExportJobsOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+ result, err := p.client.ListSearchResultExportJobs(ctx, ¶ms, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// ListSearchResultExportJobsAPIClient is a client that implements the
+// ListSearchResultExportJobs operation.
+type ListSearchResultExportJobsAPIClient interface {
+ ListSearchResultExportJobs(context.Context, *ListSearchResultExportJobsInput, ...func(*Options)) (*ListSearchResultExportJobsOutput, error)
+}
+
+var _ ListSearchResultExportJobsAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListSearchResultExportJobs(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListSearchResultExportJobs",
+ }
+}
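The generated paginator above drives a `HasMorePages`/`NextPage` loop keyed on the service's `NextToken`. As a standalone sketch of that loop, the following uses hypothetical stand-in types (`listInput`, `listOutput`, `fakeClient`, `collectAll` are illustrative names, not part of the SDK) so the token-driven pagination can be exercised without calling the real service:

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for the generated input/output types, so the
// pagination loop can be exercised without calling the real service.
type listInput struct{ NextToken *string }
type listOutput struct {
	Jobs      []string
	NextToken *string
}

// fakeClient serves pre-canned pages; a token of "1" selects page 1, etc.
type fakeClient struct{ pages []listOutput }

func (c *fakeClient) List(ctx context.Context, in *listInput) (*listOutput, error) {
	idx := 0
	if in.NextToken != nil {
		fmt.Sscanf(*in.NextToken, "%d", &idx)
	}
	return &c.pages[idx], nil
}

// collectAll mirrors the generated HasMorePages/NextPage loop: keep fetching
// while this is the first page or the last response carried a non-empty token.
func collectAll(client *fakeClient) []string {
	var all []string
	var token *string
	firstPage := true
	for firstPage || (token != nil && len(*token) != 0) {
		out, err := client.List(context.Background(), &listInput{NextToken: token})
		if err != nil {
			panic(err)
		}
		firstPage = false
		all = append(all, out.Jobs...)
		token = out.NextToken
	}
	return all
}

func main() {
	next := "1"
	client := &fakeClient{pages: []listOutput{
		{Jobs: []string{"a", "b"}, NextToken: &next},
		{Jobs: []string{"c"}}, // nil NextToken ends the loop
	}}
	fmt.Println(collectAll(client)) // [a b c]
}
```

The real paginator adds one refinement shown in the generated code: with `StopOnDuplicateToken` set, it clears the token when the service echoes back the token it was just given, preventing an infinite loop against a misbehaving service.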
diff --git a/service/backupsearch/api_op_ListTagsForResource.go b/service/backupsearch/api_op_ListTagsForResource.go
new file mode 100644
index 00000000000..c8276a77903
--- /dev/null
+++ b/service/backupsearch/api_op_ListTagsForResource.go
@@ -0,0 +1,156 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation returns the tags for a resource type.
+func (c *Client) ListTagsForResource(ctx context.Context, params *ListTagsForResourceInput, optFns ...func(*Options)) (*ListTagsForResourceOutput, error) {
+ if params == nil {
+ params = &ListTagsForResourceInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListTagsForResource", params, optFns, c.addOperationListTagsForResourceMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListTagsForResourceOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListTagsForResourceInput struct {
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the resource.
+ //
+ // This member is required.
+ ResourceArn *string
+
+ noSmithyDocumentSerde
+}
+
+type ListTagsForResourceOutput struct {
+
+ // List of tags returned by the operation.
+ Tags map[string]*string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListTagsForResourceMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListTagsForResource{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListTagsForResource{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListTagsForResource"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpListTagsForResourceValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListTagsForResource(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opListTagsForResource(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListTagsForResource",
+ }
+}
diff --git a/service/backupsearch/api_op_StartSearchJob.go b/service/backupsearch/api_op_StartSearchJob.go
new file mode 100644
index 00000000000..3ab4ee1d16c
--- /dev/null
+++ b/service/backupsearch/api_op_StartSearchJob.go
@@ -0,0 +1,195 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+// This operation creates a search job which returns recovery points filtered by
+// SearchScope and items filtered by ItemFilters.
+//
+// You can optionally include ClientToken, EncryptionKeyArn, Name, and/or Tags.
+func (c *Client) StartSearchJob(ctx context.Context, params *StartSearchJobInput, optFns ...func(*Options)) (*StartSearchJobOutput, error) {
+ if params == nil {
+ params = &StartSearchJobInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "StartSearchJob", params, optFns, c.addOperationStartSearchJobMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*StartSearchJobOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type StartSearchJobInput struct {
+
+ // This object can contain BackupResourceTypes, BackupResourceArns,
+ // BackupResourceCreationTime, BackupResourceTags, and SourceResourceArns to filter
+ // the recovery points returned by the search job.
+ //
+ // This member is required.
+ SearchScope *types.SearchScope
+
+ // Include this parameter to allow multiple identical calls for idempotency.
+ //
+ // A client token is valid for 8 hours after the first request that uses it is
+ // completed. After this time, any request with the same token is treated as a new
+ // request.
+ ClientToken *string
+
+ // The encryption key for the specified search job.
+ EncryptionKeyArn *string
+
+ // Item Filters represent all input item properties specified when the search was
+ // created.
+ //
+ // Contains either EBSItemFilters or S3ItemFilters
+ ItemFilters *types.ItemFilters
+
+ // Include alphanumeric characters to create a name for this search job.
+ Name *string
+
+ // Optional tags to include. A tag is a key-value pair you can use to manage,
+ // filter, and search for your resources. Allowed characters include UTF-8 letters,
+ // numbers, spaces, and the following characters: + - = . _ : /.
+
+ noSmithyDocumentSerde
+}
+
+type StartSearchJobOutput struct {
+
+ // The date and time that a job was created, in Unix format and Coordinated
+ // Universal Time (UTC). The value of CreationTime is accurate to milliseconds.
+ // For example, the value 1516925490.087 represents Friday, January 26, 2018
+ // 12:11:30.087 AM.
+ CreationTime *time.Time
+
+ // The unique string that identifies the Amazon Resource Name (ARN) of the
+ // specified search job.
+ SearchJobArn *string
+
+ // The unique string that specifies the search job.
+ SearchJobIdentifier *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationStartSearchJobMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpStartSearchJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpStartSearchJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "StartSearchJob"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpStartSearchJobValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opStartSearchJob(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opStartSearchJob(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "StartSearchJob",
+ }
+}
diff --git a/service/backupsearch/api_op_StartSearchResultExportJob.go b/service/backupsearch/api_op_StartSearchResultExportJob.go
new file mode 100644
index 00000000000..b31df6dba17
--- /dev/null
+++ b/service/backupsearch/api_op_StartSearchResultExportJob.go
@@ -0,0 +1,186 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation starts a job to export the results of a search job to a
+// designated S3 bucket.
+func (c *Client) StartSearchResultExportJob(ctx context.Context, params *StartSearchResultExportJobInput, optFns ...func(*Options)) (*StartSearchResultExportJobOutput, error) {
+ if params == nil {
+ params = &StartSearchResultExportJobInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "StartSearchResultExportJob", params, optFns, c.addOperationStartSearchResultExportJobMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*StartSearchResultExportJobOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type StartSearchResultExportJobInput struct {
+
+ // This specification contains a required string of the destination bucket;
+ // optionally, you can include the destination prefix.
+ //
+ // This member is required.
+ ExportSpecification types.ExportSpecification
+
+ // The unique string that specifies the search job.
+ //
+ // This member is required.
+ SearchJobIdentifier *string
+
+ // Include this parameter to allow multiple identical calls for idempotency.
+ //
+ // A client token is valid for 8 hours after the first request that uses it is
+ // completed. After this time, any request with the same token is treated as a new
+ // request.
+ ClientToken *string
+
+ // This parameter specifies the role ARN used to start the search results export
+ // jobs.
+ RoleArn *string
+
+ // Optional tags to include. A tag is a key-value pair you can use to manage,
+ // filter, and search for your resources. Allowed characters include UTF-8 letters,
+ // numbers, spaces, and the following characters: + - = . _ : /.
+ Tags map[string]*string
+
+ noSmithyDocumentSerde
+}
+
+type StartSearchResultExportJobOutput struct {
+
+ // This is the unique identifier that specifies the new export job.
+ //
+ // This member is required.
+ ExportJobIdentifier *string
+
+ // This is the unique ARN (Amazon Resource Name) that belongs to the new export
+ // job.
+ ExportJobArn *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationStartSearchResultExportJobMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpStartSearchResultExportJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpStartSearchResultExportJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "StartSearchResultExportJob"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpStartSearchResultExportJobValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opStartSearchResultExportJob(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opStartSearchResultExportJob(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "StartSearchResultExportJob",
+ }
+}
diff --git a/service/backupsearch/api_op_StopSearchJob.go b/service/backupsearch/api_op_StopSearchJob.go
new file mode 100644
index 00000000000..dc6b1d936fa
--- /dev/null
+++ b/service/backupsearch/api_op_StopSearchJob.go
@@ -0,0 +1,154 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation ends a search job.
+//
+// Only a search job with a status of RUNNING can be stopped.
+func (c *Client) StopSearchJob(ctx context.Context, params *StopSearchJobInput, optFns ...func(*Options)) (*StopSearchJobOutput, error) {
+ if params == nil {
+ params = &StopSearchJobInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "StopSearchJob", params, optFns, c.addOperationStopSearchJobMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*StopSearchJobOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type StopSearchJobInput struct {
+
+ // The unique string that specifies the search job.
+ //
+ // This member is required.
+ SearchJobIdentifier *string
+
+ noSmithyDocumentSerde
+}
+
+type StopSearchJobOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationStopSearchJobMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpStopSearchJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpStopSearchJob{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "StopSearchJob"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpStopSearchJobValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opStopSearchJob(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opStopSearchJob(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "StopSearchJob",
+ }
+}
diff --git a/service/backupsearch/api_op_TagResource.go b/service/backupsearch/api_op_TagResource.go
new file mode 100644
index 00000000000..19cd977b333
--- /dev/null
+++ b/service/backupsearch/api_op_TagResource.go
@@ -0,0 +1,161 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation puts tags on the resource you indicate.
+func (c *Client) TagResource(ctx context.Context, params *TagResourceInput, optFns ...func(*Options)) (*TagResourceOutput, error) {
+ if params == nil {
+ params = &TagResourceInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "TagResource", params, optFns, c.addOperationTagResourceMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*TagResourceOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type TagResourceInput struct {
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the resource.
+ //
+ // This is the resource that will have the indicated tags.
+ //
+ // This member is required.
+ ResourceArn *string
+
+ // Required tags to include. A tag is a key-value pair you can use to manage,
+ // filter, and search for your resources. Allowed characters include UTF-8 letters,
+ // numbers, spaces, and the following characters: + - = . _ : /.
+ //
+ // This member is required.
+ Tags map[string]*string
+
+ noSmithyDocumentSerde
+}
+
+type TagResourceOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationTagResourceMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpTagResource{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpTagResource{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "TagResource"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpTagResourceValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opTagResource(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opTagResource(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "TagResource",
+ }
+}
diff --git a/service/backupsearch/api_op_UntagResource.go b/service/backupsearch/api_op_UntagResource.go
new file mode 100644
index 00000000000..10f0aa09fae
--- /dev/null
+++ b/service/backupsearch/api_op_UntagResource.go
@@ -0,0 +1,159 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// This operation removes tags from the specified resource.
+func (c *Client) UntagResource(ctx context.Context, params *UntagResourceInput, optFns ...func(*Options)) (*UntagResourceOutput, error) {
+ if params == nil {
+ params = &UntagResourceInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UntagResource", params, optFns, c.addOperationUntagResourceMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UntagResourceOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UntagResourceInput struct {
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the resource where you
+ // want to remove tags.
+ //
+ // This member is required.
+ ResourceArn *string
+
+ // This required parameter contains the tag keys you want to remove from the
+ // source.
+ //
+ // This member is required.
+ TagKeys []string
+
+ noSmithyDocumentSerde
+}
+
+type UntagResourceOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUntagResourceMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpUntagResource{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpUntagResource{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UntagResource"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUntagResourceValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUntagResource(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUntagResource(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UntagResource",
+ }
+}
diff --git a/service/backupsearch/auth.go b/service/backupsearch/auth.go
new file mode 100644
index 00000000000..87108223831
--- /dev/null
+++ b/service/backupsearch/auth.go
@@ -0,0 +1,313 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ smithy "github.com/aws/smithy-go"
+ smithyauth "github.com/aws/smithy-go/auth"
+ "github.com/aws/smithy-go/metrics"
+ "github.com/aws/smithy-go/middleware"
+ "github.com/aws/smithy-go/tracing"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+func bindAuthParamsRegion(_ interface{}, params *AuthResolverParameters, _ interface{}, options Options) {
+ params.Region = options.Region
+}
+
+type setLegacyContextSigningOptionsMiddleware struct {
+}
+
+func (*setLegacyContextSigningOptionsMiddleware) ID() string {
+ return "setLegacyContextSigningOptions"
+}
+
+func (m *setLegacyContextSigningOptionsMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+ out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+ rscheme := getResolvedAuthScheme(ctx)
+ schemeID := rscheme.Scheme.SchemeID()
+
+ if sn := awsmiddleware.GetSigningName(ctx); sn != "" {
+ if schemeID == "aws.auth#sigv4" {
+ smithyhttp.SetSigV4SigningName(&rscheme.SignerProperties, sn)
+ } else if schemeID == "aws.auth#sigv4a" {
+ smithyhttp.SetSigV4ASigningName(&rscheme.SignerProperties, sn)
+ }
+ }
+
+ if sr := awsmiddleware.GetSigningRegion(ctx); sr != "" {
+ if schemeID == "aws.auth#sigv4" {
+ smithyhttp.SetSigV4SigningRegion(&rscheme.SignerProperties, sr)
+ } else if schemeID == "aws.auth#sigv4a" {
+ smithyhttp.SetSigV4ASigningRegions(&rscheme.SignerProperties, []string{sr})
+ }
+ }
+
+ return next.HandleFinalize(ctx, in)
+}
+
+func addSetLegacyContextSigningOptionsMiddleware(stack *middleware.Stack) error {
+ return stack.Finalize.Insert(&setLegacyContextSigningOptionsMiddleware{}, "Signing", middleware.Before)
+}
+
+type withAnonymous struct {
+ resolver AuthSchemeResolver
+}
+
+var _ AuthSchemeResolver = (*withAnonymous)(nil)
+
+func (v *withAnonymous) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+ opts, err := v.resolver.ResolveAuthSchemes(ctx, params)
+ if err != nil {
+ return nil, err
+ }
+
+ opts = append(opts, &smithyauth.Option{
+ SchemeID: smithyauth.SchemeIDAnonymous,
+ })
+ return opts, nil
+}
+
+func wrapWithAnonymousAuth(options *Options) {
+ if _, ok := options.AuthSchemeResolver.(*defaultAuthSchemeResolver); !ok {
+ return
+ }
+
+ options.AuthSchemeResolver = &withAnonymous{
+ resolver: options.AuthSchemeResolver,
+ }
+}
+
+// AuthResolverParameters contains the set of inputs necessary for auth scheme
+// resolution.
+type AuthResolverParameters struct {
+ // The name of the operation being invoked.
+ Operation string
+
+ // The region in which the operation is being invoked.
+ Region string
+}
+
+func bindAuthResolverParams(ctx context.Context, operation string, input interface{}, options Options) *AuthResolverParameters {
+ params := &AuthResolverParameters{
+ Operation: operation,
+ }
+
+ bindAuthParamsRegion(ctx, params, input, options)
+
+ return params
+}
+
+// AuthSchemeResolver returns a set of possible authentication options for an
+// operation.
+type AuthSchemeResolver interface {
+ ResolveAuthSchemes(context.Context, *AuthResolverParameters) ([]*smithyauth.Option, error)
+}
+
+type defaultAuthSchemeResolver struct{}
+
+var _ AuthSchemeResolver = (*defaultAuthSchemeResolver)(nil)
+
+func (*defaultAuthSchemeResolver) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+ if overrides, ok := operationAuthOptions[params.Operation]; ok {
+ return overrides(params), nil
+ }
+ return serviceAuthOptions(params), nil
+}
+
+var operationAuthOptions = map[string]func(*AuthResolverParameters) []*smithyauth.Option{}
+
+func serviceAuthOptions(params *AuthResolverParameters) []*smithyauth.Option {
+ return []*smithyauth.Option{
+ {
+ SchemeID: smithyauth.SchemeIDSigV4,
+ SignerProperties: func() smithy.Properties {
+ var props smithy.Properties
+ smithyhttp.SetSigV4SigningName(&props, "backup-search")
+ smithyhttp.SetSigV4SigningRegion(&props, params.Region)
+ return props
+ }(),
+ },
+ }
+}
+
+type resolveAuthSchemeMiddleware struct {
+ operation string
+ options Options
+}
+
+func (*resolveAuthSchemeMiddleware) ID() string {
+ return "ResolveAuthScheme"
+}
+
+func (m *resolveAuthSchemeMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+ out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "ResolveAuthScheme")
+ defer span.End()
+
+ params := bindAuthResolverParams(ctx, m.operation, getOperationInput(ctx), m.options)
+ options, err := m.options.AuthSchemeResolver.ResolveAuthSchemes(ctx, params)
+ if err != nil {
+ return out, metadata, fmt.Errorf("resolve auth scheme: %w", err)
+ }
+
+ scheme, ok := m.selectScheme(options)
+ if !ok {
+ return out, metadata, fmt.Errorf("could not select an auth scheme")
+ }
+
+ ctx = setResolvedAuthScheme(ctx, scheme)
+
+ span.SetProperty("auth.scheme_id", scheme.Scheme.SchemeID())
+ span.End()
+ return next.HandleFinalize(ctx, in)
+}
+
+func (m *resolveAuthSchemeMiddleware) selectScheme(options []*smithyauth.Option) (*resolvedAuthScheme, bool) {
+ for _, option := range options {
+ if option.SchemeID == smithyauth.SchemeIDAnonymous {
+ return newResolvedAuthScheme(smithyhttp.NewAnonymousScheme(), option), true
+ }
+
+ for _, scheme := range m.options.AuthSchemes {
+ if scheme.SchemeID() != option.SchemeID {
+ continue
+ }
+
+ if scheme.IdentityResolver(m.options) != nil {
+ return newResolvedAuthScheme(scheme, option), true
+ }
+ }
+ }
+
+ return nil, false
+}
+
+type resolvedAuthSchemeKey struct{}
+
+type resolvedAuthScheme struct {
+ Scheme smithyhttp.AuthScheme
+ IdentityProperties smithy.Properties
+ SignerProperties smithy.Properties
+}
+
+func newResolvedAuthScheme(scheme smithyhttp.AuthScheme, option *smithyauth.Option) *resolvedAuthScheme {
+ return &resolvedAuthScheme{
+ Scheme: scheme,
+ IdentityProperties: option.IdentityProperties,
+ SignerProperties: option.SignerProperties,
+ }
+}
+
+func setResolvedAuthScheme(ctx context.Context, scheme *resolvedAuthScheme) context.Context {
+ return middleware.WithStackValue(ctx, resolvedAuthSchemeKey{}, scheme)
+}
+
+func getResolvedAuthScheme(ctx context.Context) *resolvedAuthScheme {
+ v, _ := middleware.GetStackValue(ctx, resolvedAuthSchemeKey{}).(*resolvedAuthScheme)
+ return v
+}
+
+type getIdentityMiddleware struct {
+ options Options
+}
+
+func (*getIdentityMiddleware) ID() string {
+ return "GetIdentity"
+}
+
+func (m *getIdentityMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+ out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+ innerCtx, span := tracing.StartSpan(ctx, "GetIdentity")
+ defer span.End()
+
+ rscheme := getResolvedAuthScheme(innerCtx)
+ if rscheme == nil {
+ return out, metadata, fmt.Errorf("no resolved auth scheme")
+ }
+
+ resolver := rscheme.Scheme.IdentityResolver(m.options)
+ if resolver == nil {
+ return out, metadata, fmt.Errorf("no identity resolver")
+ }
+
+ identity, err := timeOperationMetric(ctx, "client.call.resolve_identity_duration",
+ func() (smithyauth.Identity, error) {
+ return resolver.GetIdentity(innerCtx, rscheme.IdentityProperties)
+ },
+ func(o *metrics.RecordMetricOptions) {
+ o.Properties.Set("auth.scheme_id", rscheme.Scheme.SchemeID())
+ })
+ if err != nil {
+ return out, metadata, fmt.Errorf("get identity: %w", err)
+ }
+
+ ctx = setIdentity(ctx, identity)
+
+ span.End()
+ return next.HandleFinalize(ctx, in)
+}
+
+type identityKey struct{}
+
+func setIdentity(ctx context.Context, identity smithyauth.Identity) context.Context {
+ return middleware.WithStackValue(ctx, identityKey{}, identity)
+}
+
+func getIdentity(ctx context.Context) smithyauth.Identity {
+ v, _ := middleware.GetStackValue(ctx, identityKey{}).(smithyauth.Identity)
+ return v
+}
+
+type signRequestMiddleware struct {
+ options Options
+}
+
+func (*signRequestMiddleware) ID() string {
+ return "Signing"
+}
+
+func (m *signRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+ out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "SignRequest")
+ defer span.End()
+
+ req, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, fmt.Errorf("unexpected transport type %T", in.Request)
+ }
+
+ rscheme := getResolvedAuthScheme(ctx)
+ if rscheme == nil {
+ return out, metadata, fmt.Errorf("no resolved auth scheme")
+ }
+
+ identity := getIdentity(ctx)
+ if identity == nil {
+ return out, metadata, fmt.Errorf("no identity")
+ }
+
+ signer := rscheme.Scheme.Signer()
+ if signer == nil {
+ return out, metadata, fmt.Errorf("no signer")
+ }
+
+ _, err = timeOperationMetric(ctx, "client.call.signing_duration", func() (any, error) {
+ return nil, signer.SignRequest(ctx, req, identity, rscheme.SignerProperties)
+ }, func(o *metrics.RecordMetricOptions) {
+ o.Properties.Set("auth.scheme_id", rscheme.Scheme.SchemeID())
+ })
+ if err != nil {
+ return out, metadata, fmt.Errorf("sign request: %w", err)
+ }
+
+ span.End()
+ return next.HandleFinalize(ctx, in)
+}
diff --git a/service/backupsearch/deserializers.go b/service/backupsearch/deserializers.go
new file mode 100644
index 00000000000..fecd0cc83e5
--- /dev/null
+++ b/service/backupsearch/deserializers.go
@@ -0,0 +1,4466 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "github.com/aws/aws-sdk-go-v2/aws/protocol/restjson"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ smithy "github.com/aws/smithy-go"
+ smithyio "github.com/aws/smithy-go/io"
+ "github.com/aws/smithy-go/middleware"
+ "github.com/aws/smithy-go/ptr"
+ smithytime "github.com/aws/smithy-go/time"
+ "github.com/aws/smithy-go/tracing"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "io"
+ "strconv"
+ "strings"
+ "time"
+)
+
+func deserializeS3Expires(v string) (*time.Time, error) {
+ t, err := smithytime.ParseHTTPDate(v)
+ if err != nil {
+ return nil, nil
+ }
+ return &t, nil
+}
+
+type awsRestjson1_deserializeOpGetSearchJob struct {
+}
+
+func (*awsRestjson1_deserializeOpGetSearchJob) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetSearchJob) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorGetSearchJob(response, &metadata)
+ }
+ output := &GetSearchJobOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentGetSearchJobOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetSearchJob(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentGetSearchJobOutput(v **GetSearchJobOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetSearchJobOutput
+ if *v == nil {
+ sv = &GetSearchJobOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CompletionTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CompletionTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "CurrentSearchProgress":
+ if err := awsRestjson1_deserializeDocumentCurrentSearchProgress(&sv.CurrentSearchProgress, value); err != nil {
+ return err
+ }
+
+ case "EncryptionKeyArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected EncryptionKeyArn to be of type string, got %T instead", value)
+ }
+ sv.EncryptionKeyArn = ptr.String(jtv)
+ }
+
+ case "ItemFilters":
+ if err := awsRestjson1_deserializeDocumentItemFilters(&sv.ItemFilters, value); err != nil {
+ return err
+ }
+
+ case "Name":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Name = ptr.String(jtv)
+ }
+
+ case "SearchJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobArn to be of type string, got %T instead", value)
+ }
+ sv.SearchJobArn = ptr.String(jtv)
+ }
+
+ case "SearchJobIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected GenericId to be of type string, got %T instead", value)
+ }
+ sv.SearchJobIdentifier = ptr.String(jtv)
+ }
+
+ case "SearchScope":
+ if err := awsRestjson1_deserializeDocumentSearchScope(&sv.SearchScope, value); err != nil {
+ return err
+ }
+
+ case "SearchScopeSummary":
+ if err := awsRestjson1_deserializeDocumentSearchScopeSummary(&sv.SearchScopeSummary, value); err != nil {
+ return err
+ }
+
+ case "Status":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobState to be of type string, got %T instead", value)
+ }
+ sv.Status = types.SearchJobState(jtv)
+ }
+
+ case "StatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.StatusMessage = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpGetSearchResultExportJob struct {
+}
+
+func (*awsRestjson1_deserializeOpGetSearchResultExportJob) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetSearchResultExportJob) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorGetSearchResultExportJob(response, &metadata)
+ }
+ output := &GetSearchResultExportJobOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentGetSearchResultExportJobOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetSearchResultExportJob(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentGetSearchResultExportJobOutput(v **GetSearchResultExportJobOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetSearchResultExportJobOutput
+ if *v == nil {
+ sv = &GetSearchResultExportJobOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CompletionTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CompletionTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "ExportJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ExportJobArn to be of type string, got %T instead", value)
+ }
+ sv.ExportJobArn = ptr.String(jtv)
+ }
+
+ case "ExportJobIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected GenericId to be of type string, got %T instead", value)
+ }
+ sv.ExportJobIdentifier = ptr.String(jtv)
+ }
+
+ case "ExportSpecification":
+ if err := awsRestjson1_deserializeDocumentExportSpecification(&sv.ExportSpecification, value); err != nil {
+ return err
+ }
+
+ case "SearchJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobArn to be of type string, got %T instead", value)
+ }
+ sv.SearchJobArn = ptr.String(jtv)
+ }
+
+ case "Status":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ExportJobStatus to be of type string, got %T instead", value)
+ }
+ sv.Status = types.ExportJobStatus(jtv)
+ }
+
+ case "StatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.StatusMessage = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListSearchJobBackups struct {
+}
+
+func (*awsRestjson1_deserializeOpListSearchJobBackups) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListSearchJobBackups) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListSearchJobBackups(response, &metadata)
+ }
+ output := &ListSearchJobBackupsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListSearchJobBackupsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListSearchJobBackups(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListSearchJobBackupsOutput(v **ListSearchJobBackupsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListSearchJobBackupsOutput
+ if *v == nil {
+ sv = &ListSearchJobBackupsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ case "Results":
+ if err := awsRestjson1_deserializeDocumentSearchJobBackupsResults(&sv.Results, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListSearchJobResults struct {
+}
+
+func (*awsRestjson1_deserializeOpListSearchJobResults) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListSearchJobResults) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListSearchJobResults(response, &metadata)
+ }
+ output := &ListSearchJobResultsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListSearchJobResultsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListSearchJobResults(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListSearchJobResultsOutput(v **ListSearchJobResultsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListSearchJobResultsOutput
+ if *v == nil {
+ sv = &ListSearchJobResultsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ case "Results":
+ if err := awsRestjson1_deserializeDocumentResults(&sv.Results, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListSearchJobs struct {
+}
+
+func (*awsRestjson1_deserializeOpListSearchJobs) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListSearchJobs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListSearchJobs(response, &metadata)
+ }
+ output := &ListSearchJobsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListSearchJobsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListSearchJobs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListSearchJobsOutput(v **ListSearchJobsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListSearchJobsOutput
+ if *v == nil {
+ sv = &ListSearchJobsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ case "SearchJobs":
+ if err := awsRestjson1_deserializeDocumentSearchJobs(&sv.SearchJobs, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListSearchResultExportJobs struct {
+}
+
+func (*awsRestjson1_deserializeOpListSearchResultExportJobs) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListSearchResultExportJobs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListSearchResultExportJobs(response, &metadata)
+ }
+ output := &ListSearchResultExportJobsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListSearchResultExportJobsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListSearchResultExportJobs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ServiceQuotaExceededException", errorCode):
+ return awsRestjson1_deserializeErrorServiceQuotaExceededException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListSearchResultExportJobsOutput(v **ListSearchResultExportJobsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListSearchResultExportJobsOutput
+ if *v == nil {
+ sv = &ListSearchResultExportJobsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ExportJobs":
+ if err := awsRestjson1_deserializeDocumentExportJobSummaries(&sv.ExportJobs, value); err != nil {
+ return err
+ }
+
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListTagsForResource struct {
+}
+
+func (*awsRestjson1_deserializeOpListTagsForResource) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListTagsForResource) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListTagsForResource(response, &metadata)
+ }
+ output := &ListTagsForResourceOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListTagsForResourceOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListTagsForResource(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListTagsForResourceOutput(v **ListTagsForResourceOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListTagsForResourceOutput
+ if *v == nil {
+ sv = &ListTagsForResourceOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Tags":
+ if err := awsRestjson1_deserializeDocumentTagMap(&sv.Tags, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpStartSearchJob struct {
+}
+
+func (*awsRestjson1_deserializeOpStartSearchJob) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpStartSearchJob) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorStartSearchJob(response, &metadata)
+ }
+ output := &StartSearchJobOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentStartSearchJobOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorStartSearchJob(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("ConflictException", errorCode):
+ return awsRestjson1_deserializeErrorConflictException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ServiceQuotaExceededException", errorCode):
+ return awsRestjson1_deserializeErrorServiceQuotaExceededException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentStartSearchJobOutput(v **StartSearchJobOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *StartSearchJobOutput
+ if *v == nil {
+ sv = &StartSearchJobOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "SearchJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobArn to be of type string, got %T instead", value)
+ }
+ sv.SearchJobArn = ptr.String(jtv)
+ }
+
+ case "SearchJobIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected GenericId to be of type string, got %T instead", value)
+ }
+ sv.SearchJobIdentifier = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpStartSearchResultExportJob struct {
+}
+
+func (*awsRestjson1_deserializeOpStartSearchResultExportJob) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpStartSearchResultExportJob) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorStartSearchResultExportJob(response, &metadata)
+ }
+ output := &StartSearchResultExportJobOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentStartSearchResultExportJobOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorStartSearchResultExportJob(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("ConflictException", errorCode):
+ return awsRestjson1_deserializeErrorConflictException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ServiceQuotaExceededException", errorCode):
+ return awsRestjson1_deserializeErrorServiceQuotaExceededException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentStartSearchResultExportJobOutput(v **StartSearchResultExportJobOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *StartSearchResultExportJobOutput
+ if *v == nil {
+ sv = &StartSearchResultExportJobOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ExportJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ExportJobArn to be of type string, got %T instead", value)
+ }
+ sv.ExportJobArn = ptr.String(jtv)
+ }
+
+ case "ExportJobIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected GenericId to be of type string, got %T instead", value)
+ }
+ sv.ExportJobIdentifier = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpStopSearchJob struct {
+}
+
+func (*awsRestjson1_deserializeOpStopSearchJob) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpStopSearchJob) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorStopSearchJob(response, &metadata)
+ }
+ output := &StopSearchJobOutput{}
+ out.Result = output
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorStopSearchJob(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("ConflictException", errorCode):
+ return awsRestjson1_deserializeErrorConflictException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsRestjson1_deserializeOpTagResource struct {
+}
+
+func (*awsRestjson1_deserializeOpTagResource) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpTagResource) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorTagResource(response, &metadata)
+ }
+ output := &TagResourceOutput{}
+ out.Result = output
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorTagResource(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsRestjson1_deserializeOpUntagResource struct {
+}
+
+func (*awsRestjson1_deserializeOpUntagResource) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpUntagResource) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorUntagResource(response, &metadata)
+ }
+ output := &UntagResourceOutput{}
+ out.Result = output
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorUntagResource(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpHttpBindingsInternalServerException(v *types.InternalServerException, response *smithyhttp.Response) error {
+ if v == nil {
+ return fmt.Errorf("unsupported deserialization for nil %T", v)
+ }
+
+ if headerValues := response.Header.Values("Retry-After"); len(headerValues) != 0 {
+ headerValues[0] = strings.TrimSpace(headerValues[0])
+ vv, err := strconv.ParseInt(headerValues[0], 0, 32)
+ if err != nil {
+ return err
+ }
+ v.RetryAfterSeconds = ptr.Int32(int32(vv))
+ }
+
+ return nil
+}
+func awsRestjson1_deserializeOpHttpBindingsThrottlingException(v *types.ThrottlingException, response *smithyhttp.Response) error {
+ if v == nil {
+ return fmt.Errorf("unsupported deserialization for nil %T", v)
+ }
+
+ if headerValues := response.Header.Values("Retry-After"); len(headerValues) != 0 {
+ headerValues[0] = strings.TrimSpace(headerValues[0])
+ vv, err := strconv.ParseInt(headerValues[0], 0, 32)
+ if err != nil {
+ return err
+ }
+ v.RetryAfterSeconds = ptr.Int32(int32(vv))
+ }
+
+ return nil
+}
+func awsRestjson1_deserializeErrorAccessDeniedException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.AccessDeniedException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentAccessDeniedException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ return output
+}
+
+func awsRestjson1_deserializeErrorConflictException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.ConflictException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentConflictException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ return output
+}
+
+func awsRestjson1_deserializeErrorInternalServerException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.InternalServerException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentInternalServerException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ if err := awsRestjson1_deserializeOpHttpBindingsInternalServerException(output, response); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to decode response error with invalid HTTP bindings, %w", err)}
+ }
+
+ return output
+}
+
+func awsRestjson1_deserializeErrorResourceNotFoundException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.ResourceNotFoundException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentResourceNotFoundException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ return output
+}
+
+func awsRestjson1_deserializeErrorServiceQuotaExceededException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.ServiceQuotaExceededException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentServiceQuotaExceededException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ return output
+}
+
+func awsRestjson1_deserializeErrorThrottlingException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.ThrottlingException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentThrottlingException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ if err := awsRestjson1_deserializeOpHttpBindingsThrottlingException(output, response); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to decode response error with invalid HTTP bindings, %w", err)}
+ }
+
+ return output
+}
+
+func awsRestjson1_deserializeErrorValidationException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ output := &types.ValidationException{}
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ err := awsRestjson1_deserializeDocumentValidationException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+
+ return output
+}
+
+func awsRestjson1_deserializeDocumentAccessDeniedException(v **types.AccessDeniedException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.AccessDeniedException
+ if *v == nil {
+ sv = &types.AccessDeniedException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentBackupCreationTimeFilter(v **types.BackupCreationTimeFilter, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.BackupCreationTimeFilter
+ if *v == nil {
+ sv = &types.BackupCreationTimeFilter{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CreatedAfter":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreatedAfter = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "CreatedBefore":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreatedBefore = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentConflictException(v **types.ConflictException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ConflictException
+ if *v == nil {
+ sv = &types.ConflictException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ case "resourceId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ResourceId = ptr.String(jtv)
+ }
+
+ case "resourceType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ResourceType = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentCurrentSearchProgress(v **types.CurrentSearchProgress, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.CurrentSearchProgress
+ if *v == nil {
+ sv = &types.CurrentSearchProgress{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ItemsMatchedCount":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.ItemsMatchedCount = ptr.Int64(i64)
+ }
+
+ case "ItemsScannedCount":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.ItemsScannedCount = ptr.Int64(i64)
+ }
+
+ case "RecoveryPointsScannedCount":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Integer to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.RecoveryPointsScannedCount = ptr.Int32(int32(i64))
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentEBSItemFilter(v **types.EBSItemFilter, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.EBSItemFilter
+ if *v == nil {
+ sv = &types.EBSItemFilter{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CreationTimes":
+ if err := awsRestjson1_deserializeDocumentTimeConditionList(&sv.CreationTimes, value); err != nil {
+ return err
+ }
+
+ case "FilePaths":
+ if err := awsRestjson1_deserializeDocumentStringConditionList(&sv.FilePaths, value); err != nil {
+ return err
+ }
+
+ case "LastModificationTimes":
+ if err := awsRestjson1_deserializeDocumentTimeConditionList(&sv.LastModificationTimes, value); err != nil {
+ return err
+ }
+
+ case "Sizes":
+ if err := awsRestjson1_deserializeDocumentLongConditionList(&sv.Sizes, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentEBSItemFilters(v *[]types.EBSItemFilter, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.EBSItemFilter
+ if *v == nil {
+ cv = []types.EBSItemFilter{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.EBSItemFilter
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentEBSItemFilter(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentEBSResultItem(v **types.EBSResultItem, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.EBSResultItem
+ if *v == nil {
+ sv = &types.EBSResultItem{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.BackupResourceArn = ptr.String(jtv)
+ }
+
+ case "BackupVaultName":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.BackupVaultName = ptr.String(jtv)
+ }
+
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "FilePath":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected FilePath to be of type string, got %T instead", value)
+ }
+ sv.FilePath = ptr.String(jtv)
+ }
+
+ case "FileSize":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.FileSize = ptr.Int64(i64)
+ }
+
+ case "FileSystemIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.FileSystemIdentifier = ptr.String(jtv)
+ }
+
+ case "LastModifiedTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.LastModifiedTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "SourceResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.SourceResourceArn = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentExportJobSummaries(v *[]types.ExportJobSummary, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.ExportJobSummary
+ if *v == nil {
+ cv = []types.ExportJobSummary{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.ExportJobSummary
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentExportJobSummary(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentExportJobSummary(v **types.ExportJobSummary, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ExportJobSummary
+ if *v == nil {
+ sv = &types.ExportJobSummary{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CompletionTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CompletionTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "ExportJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ExportJobArn to be of type string, got %T instead", value)
+ }
+ sv.ExportJobArn = ptr.String(jtv)
+ }
+
+ case "ExportJobIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected GenericId to be of type string, got %T instead", value)
+ }
+ sv.ExportJobIdentifier = ptr.String(jtv)
+ }
+
+ case "SearchJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobArn to be of type string, got %T instead", value)
+ }
+ sv.SearchJobArn = ptr.String(jtv)
+ }
+
+ case "Status":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ExportJobStatus to be of type string, got %T instead", value)
+ }
+ sv.Status = types.ExportJobStatus(jtv)
+ }
+
+ case "StatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.StatusMessage = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentExportSpecification(v *types.ExportSpecification, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var uv types.ExportSpecification
+loop:
+ for key, value := range shape {
+ if value == nil {
+ continue
+ }
+ switch key {
+ case "s3ExportSpecification":
+ var mv types.S3ExportSpecification
+ destAddr := &mv
+ if err := awsRestjson1_deserializeDocumentS3ExportSpecification(&destAddr, value); err != nil {
+ return err
+ }
+ mv = *destAddr
+ uv = &types.ExportSpecificationMemberS3ExportSpecification{Value: mv}
+ break loop
+
+ default:
+ uv = &types.UnknownUnionMember{Tag: key}
+ break loop
+
+ }
+ }
+ *v = uv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentInternalServerException(v **types.InternalServerException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.InternalServerException
+ if *v == nil {
+ sv = &types.InternalServerException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ case "retryAfterSeconds":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Integer to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.RetryAfterSeconds = ptr.Int32(int32(i64))
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentItemFilters(v **types.ItemFilters, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ItemFilters
+ if *v == nil {
+ sv = &types.ItemFilters{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "EBSItemFilters":
+ if err := awsRestjson1_deserializeDocumentEBSItemFilters(&sv.EBSItemFilters, value); err != nil {
+ return err
+ }
+
+ case "S3ItemFilters":
+ if err := awsRestjson1_deserializeDocumentS3ItemFilters(&sv.S3ItemFilters, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentLongCondition(v **types.LongCondition, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.LongCondition
+ if *v == nil {
+ sv = &types.LongCondition{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Operator":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected LongConditionOperator to be of type string, got %T instead", value)
+ }
+ sv.Operator = types.LongConditionOperator(jtv)
+ }
+
+ case "Value":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.Value = ptr.Int64(i64)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentLongConditionList(v *[]types.LongCondition, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.LongCondition
+ if *v == nil {
+ cv = []types.LongCondition{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.LongCondition
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentLongCondition(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentRecoveryPointArnList(v *[]string, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []string
+ if *v == nil {
+ cv = []string{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col string
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected RecoveryPoint to be of type string, got %T instead", value)
+ }
+ col = jtv
+ }
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentResourceArnList(v *[]string, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []string
+ if *v == nil {
+ cv = []string{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col string
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ col = jtv
+ }
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentResourceNotFoundException(v **types.ResourceNotFoundException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ResourceNotFoundException
+ if *v == nil {
+ sv = &types.ResourceNotFoundException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ case "resourceId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ResourceId = ptr.String(jtv)
+ }
+
+ case "resourceType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ResourceType = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentResourceTypeList(v *[]types.ResourceType, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.ResourceType
+ if *v == nil {
+ cv = []types.ResourceType{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.ResourceType
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ResourceType to be of type string, got %T instead", value)
+ }
+ col = types.ResourceType(jtv)
+ }
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentResultItem(v *types.ResultItem, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var uv types.ResultItem
+loop:
+ for key, value := range shape {
+ if value == nil {
+ continue
+ }
+ switch key {
+ case "EBSResultItem":
+ var mv types.EBSResultItem
+ destAddr := &mv
+ if err := awsRestjson1_deserializeDocumentEBSResultItem(&destAddr, value); err != nil {
+ return err
+ }
+ mv = *destAddr
+ uv = &types.ResultItemMemberEBSResultItem{Value: mv}
+ break loop
+
+ case "S3ResultItem":
+ var mv types.S3ResultItem
+ destAddr := &mv
+ if err := awsRestjson1_deserializeDocumentS3ResultItem(&destAddr, value); err != nil {
+ return err
+ }
+ mv = *destAddr
+ uv = &types.ResultItemMemberS3ResultItem{Value: mv}
+ break loop
+
+ default:
+ uv = &types.UnknownUnionMember{Tag: key}
+ break loop
+
+ }
+ }
+ *v = uv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentResults(v *[]types.ResultItem, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.ResultItem
+ if *v == nil {
+ cv = []types.ResultItem{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.ResultItem
+ if err := awsRestjson1_deserializeDocumentResultItem(&col, value); err != nil {
+ return err
+ }
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentS3ExportSpecification(v **types.S3ExportSpecification, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.S3ExportSpecification
+ if *v == nil {
+ sv = &types.S3ExportSpecification{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "DestinationBucket":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.DestinationBucket = ptr.String(jtv)
+ }
+
+ case "DestinationPrefix":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.DestinationPrefix = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentS3ItemFilter(v **types.S3ItemFilter, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.S3ItemFilter
+ if *v == nil {
+ sv = &types.S3ItemFilter{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CreationTimes":
+ if err := awsRestjson1_deserializeDocumentTimeConditionList(&sv.CreationTimes, value); err != nil {
+ return err
+ }
+
+ case "ETags":
+ if err := awsRestjson1_deserializeDocumentStringConditionList(&sv.ETags, value); err != nil {
+ return err
+ }
+
+ case "ObjectKeys":
+ if err := awsRestjson1_deserializeDocumentStringConditionList(&sv.ObjectKeys, value); err != nil {
+ return err
+ }
+
+ case "Sizes":
+ if err := awsRestjson1_deserializeDocumentLongConditionList(&sv.Sizes, value); err != nil {
+ return err
+ }
+
+ case "VersionIds":
+ if err := awsRestjson1_deserializeDocumentStringConditionList(&sv.VersionIds, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentS3ItemFilters(v *[]types.S3ItemFilter, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.S3ItemFilter
+ if *v == nil {
+ cv = []types.S3ItemFilter{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.S3ItemFilter
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentS3ItemFilter(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentS3ResultItem(v **types.S3ResultItem, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.S3ResultItem
+ if *v == nil {
+ sv = &types.S3ResultItem{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.BackupResourceArn = ptr.String(jtv)
+ }
+
+ case "BackupVaultName":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.BackupVaultName = ptr.String(jtv)
+ }
+
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "ETag":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ETag = ptr.String(jtv)
+ }
+
+ case "ObjectKey":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ObjectKey to be of type string, got %T instead", value)
+ }
+ sv.ObjectKey = ptr.String(jtv)
+ }
+
+ case "ObjectSize":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.ObjectSize = ptr.Int64(i64)
+ }
+
+ case "SourceResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.SourceResourceArn = ptr.String(jtv)
+ }
+
+ case "VersionId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.VersionId = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentSearchJobBackupsResult(v **types.SearchJobBackupsResult, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.SearchJobBackupsResult
+ if *v == nil {
+ sv = &types.SearchJobBackupsResult{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupCreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.BackupCreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "BackupResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.BackupResourceArn = ptr.String(jtv)
+ }
+
+ case "IndexCreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.IndexCreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "ResourceType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ResourceType to be of type string, got %T instead", value)
+ }
+ sv.ResourceType = types.ResourceType(jtv)
+ }
+
+ case "SourceResourceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.SourceResourceArn = ptr.String(jtv)
+ }
+
+ case "Status":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobState to be of type string, got %T instead", value)
+ }
+ sv.Status = types.SearchJobState(jtv)
+ }
+
+ case "StatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.StatusMessage = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentSearchJobBackupsResults(v *[]types.SearchJobBackupsResult, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.SearchJobBackupsResult
+ if *v == nil {
+ cv = []types.SearchJobBackupsResult{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.SearchJobBackupsResult
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentSearchJobBackupsResult(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentSearchJobs(v *[]types.SearchJobSummary, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.SearchJobSummary
+ if *v == nil {
+ cv = []types.SearchJobSummary{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.SearchJobSummary
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentSearchJobSummary(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentSearchJobSummary(v **types.SearchJobSummary, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.SearchJobSummary
+ if *v == nil {
+ sv = &types.SearchJobSummary{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "CompletionTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CompletionTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "CreationTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.CreationTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ case "Name":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Name = ptr.String(jtv)
+ }
+
+ case "SearchJobArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobArn to be of type string, got %T instead", value)
+ }
+ sv.SearchJobArn = ptr.String(jtv)
+ }
+
+ case "SearchJobIdentifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected GenericId to be of type string, got %T instead", value)
+ }
+ sv.SearchJobIdentifier = ptr.String(jtv)
+ }
+
+ case "SearchScopeSummary":
+ if err := awsRestjson1_deserializeDocumentSearchScopeSummary(&sv.SearchScopeSummary, value); err != nil {
+ return err
+ }
+
+ case "Status":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SearchJobState to be of type string, got %T instead", value)
+ }
+ sv.Status = types.SearchJobState(jtv)
+ }
+
+ case "StatusMessage":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.StatusMessage = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentSearchScope(v **types.SearchScope, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.SearchScope
+ if *v == nil {
+ sv = &types.SearchScope{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "BackupResourceArns":
+ if err := awsRestjson1_deserializeDocumentRecoveryPointArnList(&sv.BackupResourceArns, value); err != nil {
+ return err
+ }
+
+ case "BackupResourceCreationTime":
+ if err := awsRestjson1_deserializeDocumentBackupCreationTimeFilter(&sv.BackupResourceCreationTime, value); err != nil {
+ return err
+ }
+
+ case "BackupResourceTags":
+ if err := awsRestjson1_deserializeDocumentTagMap(&sv.BackupResourceTags, value); err != nil {
+ return err
+ }
+
+ case "BackupResourceTypes":
+ if err := awsRestjson1_deserializeDocumentResourceTypeList(&sv.BackupResourceTypes, value); err != nil {
+ return err
+ }
+
+ case "SourceResourceArns":
+ if err := awsRestjson1_deserializeDocumentResourceArnList(&sv.SourceResourceArns, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentSearchScopeSummary(v **types.SearchScopeSummary, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.SearchScopeSummary
+ if *v == nil {
+ sv = &types.SearchScopeSummary{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "TotalItemsToScanCount":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Long to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.TotalItemsToScanCount = ptr.Int64(i64)
+ }
+
+ case "TotalRecoveryPointsToScanCount":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Integer to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.TotalRecoveryPointsToScanCount = ptr.Int32(int32(i64))
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentServiceQuotaExceededException(v **types.ServiceQuotaExceededException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ServiceQuotaExceededException
+ if *v == nil {
+ sv = &types.ServiceQuotaExceededException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ case "quotaCode":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.QuotaCode = ptr.String(jtv)
+ }
+
+ case "resourceId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ResourceId = ptr.String(jtv)
+ }
+
+ case "resourceType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ResourceType = ptr.String(jtv)
+ }
+
+ case "serviceCode":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ServiceCode = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentStringCondition(v **types.StringCondition, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.StringCondition
+ if *v == nil {
+ sv = &types.StringCondition{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Operator":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected StringConditionOperator to be of type string, got %T instead", value)
+ }
+ sv.Operator = types.StringConditionOperator(jtv)
+ }
+
+ case "Value":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Value = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentStringConditionList(v *[]types.StringCondition, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.StringCondition
+ if *v == nil {
+ cv = []types.StringCondition{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.StringCondition
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentStringCondition(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentTagMap(v *map[string]*string, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var mv map[string]*string
+ if *v == nil {
+ mv = map[string]*string{}
+ } else {
+ mv = *v
+ }
+
+ for key, value := range shape {
+ var parsedVal *string
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ parsedVal = ptr.String(jtv)
+ }
+ mv[key] = parsedVal
+
+ }
+ *v = mv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentThrottlingException(v **types.ThrottlingException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ThrottlingException
+ if *v == nil {
+ sv = &types.ThrottlingException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ case "quotaCode":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.QuotaCode = ptr.String(jtv)
+ }
+
+ case "retryAfterSeconds":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Integer to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.RetryAfterSeconds = ptr.Int32(int32(i64))
+ }
+
+ case "serviceCode":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ServiceCode = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentTimeCondition(v **types.TimeCondition, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.TimeCondition
+ if *v == nil {
+ sv = &types.TimeCondition{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Operator":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected TimeConditionOperator to be of type string, got %T instead", value)
+ }
+ sv.Operator = types.TimeConditionOperator(jtv)
+ }
+
+ case "Value":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.Value = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentTimeConditionList(v *[]types.TimeCondition, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.TimeCondition
+ if *v == nil {
+ cv = []types.TimeCondition{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.TimeCondition
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentTimeCondition(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentValidationException(v **types.ValidationException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ValidationException
+ if *v == nil {
+ sv = &types.ValidationException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
diff --git a/service/backupsearch/doc.go b/service/backupsearch/doc.go
new file mode 100644
index 00000000000..c4f7da40eed
--- /dev/null
+++ b/service/backupsearch/doc.go
@@ -0,0 +1,18 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+// Package backupsearch provides the API client, operations, and parameter types
+// for AWS Backup Search.
+//
+// # Backup Search
+//
+// Backup Search is the recovery point and item-level search for Backup.
+//
+// For additional information, see:
+//
+// [Backup API Reference]
+//
+// [Backup Developer Guide]
+//
+// [Backup API Reference]: https://docs.aws.amazon.com/aws-backup/latest/devguide/api-reference.html
+// [Backup Developer Guide]: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
+package backupsearch
diff --git a/service/backupsearch/endpoints.go b/service/backupsearch/endpoints.go
new file mode 100644
index 00000000000..808e5c7edf7
--- /dev/null
+++ b/service/backupsearch/endpoints.go
@@ -0,0 +1,491 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "github.com/aws/aws-sdk-go-v2/aws"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+ "github.com/aws/aws-sdk-go-v2/internal/endpoints"
+ "github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn"
+ internalendpoints "github.com/aws/aws-sdk-go-v2/service/backupsearch/internal/endpoints"
+ smithy "github.com/aws/smithy-go"
+ smithyauth "github.com/aws/smithy-go/auth"
+ smithyendpoints "github.com/aws/smithy-go/endpoints"
+ "github.com/aws/smithy-go/middleware"
+ "github.com/aws/smithy-go/ptr"
+ "github.com/aws/smithy-go/tracing"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "net/http"
+ "net/url"
+ "os"
+ "strings"
+)
+
+// EndpointResolverOptions is the service endpoint resolver options
+type EndpointResolverOptions = internalendpoints.Options
+
+// EndpointResolver interface for resolving service endpoints.
+type EndpointResolver interface {
+ ResolveEndpoint(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+}
+
+var _ EndpointResolver = &internalendpoints.Resolver{}
+
+// NewDefaultEndpointResolver constructs a new service endpoint resolver
+func NewDefaultEndpointResolver() *internalendpoints.Resolver {
+ return internalendpoints.New()
+}
+
+// EndpointResolverFunc is a helper utility that wraps a function so it satisfies
+// the EndpointResolver interface. This is useful when you want to add additional
+// endpoint resolving logic, or stub out specific endpoints with custom values.
+type EndpointResolverFunc func(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+
+func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+ return fn(region, options)
+}
+
+// EndpointResolverFromURL returns an EndpointResolver configured using the
+// provided endpoint url. By default, the resolved endpoint resolver uses the
+// client region as signing region, and the endpoint source is set to
+// EndpointSourceCustom.You can provide functional options to configure endpoint
+// values for the resolved endpoint.
+func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver {
+ e := aws.Endpoint{URL: url, Source: aws.EndpointSourceCustom}
+ for _, fn := range optFns {
+ fn(&e)
+ }
+
+ return EndpointResolverFunc(
+ func(region string, options EndpointResolverOptions) (aws.Endpoint, error) {
+ if len(e.SigningRegion) == 0 {
+ e.SigningRegion = region
+ }
+ return e, nil
+ },
+ )
+}
+
+type ResolveEndpoint struct {
+ Resolver EndpointResolver
+ Options EndpointResolverOptions
+}
+
+func (*ResolveEndpoint) ID() string {
+ return "ResolveEndpoint"
+}
+
+func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ if !awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+ return next.HandleSerialize(ctx, in)
+ }
+
+ req, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+ }
+
+ if m.Resolver == nil {
+ return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+ }
+
+ eo := m.Options
+ eo.Logger = middleware.GetLogger(ctx)
+
+ var endpoint aws.Endpoint
+ endpoint, err = m.Resolver.ResolveEndpoint(awsmiddleware.GetRegion(ctx), eo)
+ if err != nil {
+ nf := (&aws.EndpointNotFoundError{})
+ if errors.As(err, &nf) {
+ ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, false)
+ return next.HandleSerialize(ctx, in)
+ }
+ return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+ }
+
+ req.URL, err = url.Parse(endpoint.URL)
+ if err != nil {
+ return out, metadata, fmt.Errorf("failed to parse endpoint URL: %w", err)
+ }
+
+ if len(awsmiddleware.GetSigningName(ctx)) == 0 {
+ signingName := endpoint.SigningName
+ if len(signingName) == 0 {
+ signingName = "backup-search"
+ }
+ ctx = awsmiddleware.SetSigningName(ctx, signingName)
+ }
+ ctx = awsmiddleware.SetEndpointSource(ctx, endpoint.Source)
+ ctx = smithyhttp.SetHostnameImmutable(ctx, endpoint.HostnameImmutable)
+ ctx = awsmiddleware.SetSigningRegion(ctx, endpoint.SigningRegion)
+ ctx = awsmiddleware.SetPartitionID(ctx, endpoint.PartitionID)
+ return next.HandleSerialize(ctx, in)
+}
+func addResolveEndpointMiddleware(stack *middleware.Stack, o Options) error {
+ return stack.Serialize.Insert(&ResolveEndpoint{
+ Resolver: o.EndpointResolver,
+ Options: o.EndpointOptions,
+ }, "OperationSerializer", middleware.Before)
+}
+
+func removeResolveEndpointMiddleware(stack *middleware.Stack) error {
+ _, err := stack.Serialize.Remove((&ResolveEndpoint{}).ID())
+ return err
+}
+
+type wrappedEndpointResolver struct {
+ awsResolver aws.EndpointResolverWithOptions
+}
+
+func (w *wrappedEndpointResolver) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+ return w.awsResolver.ResolveEndpoint(ServiceID, region, options)
+}
+
+type awsEndpointResolverAdaptor func(service, region string) (aws.Endpoint, error)
+
+func (a awsEndpointResolverAdaptor) ResolveEndpoint(service, region string, options ...interface{}) (aws.Endpoint, error) {
+ return a(service, region)
+}
+
+var _ aws.EndpointResolverWithOptions = awsEndpointResolverAdaptor(nil)
+
+// withEndpointResolver returns an aws.EndpointResolverWithOptions that first delegates endpoint resolution to the awsResolver.
+// If awsResolver returns aws.EndpointNotFoundError error, the v1 resolver middleware will swallow the error,
+// and set an appropriate context flag such that fallback will occur when EndpointResolverV2 is invoked
+// via its middleware.
+//
+// If another error (besides aws.EndpointNotFoundError) is returned, then that error will be propagated.
+func withEndpointResolver(awsResolver aws.EndpointResolver, awsResolverWithOptions aws.EndpointResolverWithOptions) EndpointResolver {
+ var resolver aws.EndpointResolverWithOptions
+
+ if awsResolverWithOptions != nil {
+ resolver = awsResolverWithOptions
+ } else if awsResolver != nil {
+ resolver = awsEndpointResolverAdaptor(awsResolver.ResolveEndpoint)
+ }
+
+ return &wrappedEndpointResolver{
+ awsResolver: resolver,
+ }
+}
+
+func finalizeClientEndpointResolverOptions(options *Options) {
+ options.EndpointOptions.LogDeprecated = options.ClientLogMode.IsDeprecatedUsage()
+
+ if len(options.EndpointOptions.ResolvedRegion) == 0 {
+ const fipsInfix = "-fips-"
+ const fipsPrefix = "fips-"
+ const fipsSuffix = "-fips"
+
+ if strings.Contains(options.Region, fipsInfix) ||
+ strings.Contains(options.Region, fipsPrefix) ||
+ strings.Contains(options.Region, fipsSuffix) {
+ options.EndpointOptions.ResolvedRegion = strings.ReplaceAll(strings.ReplaceAll(strings.ReplaceAll(
+ options.Region, fipsInfix, "-"), fipsPrefix, ""), fipsSuffix, "")
+ options.EndpointOptions.UseFIPSEndpoint = aws.FIPSEndpointStateEnabled
+ }
+ }
+
+}
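The pseudo-region handling above strips any embedded "fips" marker from the configured region and flips on the FIPS endpoint variant. A minimal standalone sketch of that normalization (the `normalizeFIPSRegion` helper is hypothetical, written here for illustration and not part of the generated client):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeFIPSRegion strips a "fips" infix, prefix, or suffix from a
// pseudo-region such as "fips-us-east-1" and reports whether the FIPS
// endpoint variant should be used, mirroring the logic in
// finalizeClientEndpointResolverOptions.
func normalizeFIPSRegion(region string) (resolved string, useFIPS bool) {
	const fipsInfix = "-fips-"
	const fipsPrefix = "fips-"
	const fipsSuffix = "-fips"

	if strings.Contains(region, fipsInfix) ||
		strings.Contains(region, fipsPrefix) ||
		strings.Contains(region, fipsSuffix) {
		resolved = strings.ReplaceAll(strings.ReplaceAll(strings.ReplaceAll(
			region, fipsInfix, "-"), fipsPrefix, ""), fipsSuffix, "")
		return resolved, true
	}
	return region, false
}

func main() {
	r, f := normalizeFIPSRegion("fips-us-east-1")
	fmt.Println(r, f) // us-east-1 true
	r, f = normalizeFIPSRegion("eu-west-2")
	fmt.Println(r, f) // eu-west-2 false
}
```

Note this only runs when `EndpointOptions.ResolvedRegion` has not already been set, so an explicitly resolved region is never rewritten.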
+
+func resolveEndpointResolverV2(options *Options) {
+ if options.EndpointResolverV2 == nil {
+ options.EndpointResolverV2 = NewDefaultEndpointResolverV2()
+ }
+}
+
+func resolveBaseEndpoint(cfg aws.Config, o *Options) {
+ if cfg.BaseEndpoint != nil {
+ o.BaseEndpoint = cfg.BaseEndpoint
+ }
+
+ _, g := os.LookupEnv("AWS_ENDPOINT_URL")
+ _, s := os.LookupEnv("AWS_ENDPOINT_URL_BACKUPSEARCH")
+
+ if g && !s {
+ return
+ }
+
+ value, found, err := internalConfig.ResolveServiceBaseEndpoint(context.Background(), "BackupSearch", cfg.ConfigSources)
+ if found && err == nil {
+ o.BaseEndpoint = &value
+ }
+}
+
+func bindRegion(region string) *string {
+ if region == "" {
+ return nil
+ }
+ return aws.String(endpoints.MapFIPSRegion(region))
+}
+
+// EndpointParameters provides the parameters that influence how endpoints are
+// resolved.
+type EndpointParameters struct {
+ // When true, send this request to the FIPS-compliant regional endpoint. If the
+ // configured endpoint does not have a FIPS compliant endpoint, dispatching the
+ // request will return an error.
+ //
+ // Defaults to false if no value is
+ // provided.
+ //
+ // AWS::UseFIPS
+ UseFIPS *bool
+
+ // Override the endpoint used to send this request
+ //
+ // Parameter is
+ // required.
+ //
+ // SDK::Endpoint
+ Endpoint *string
+
+ // The AWS region used to dispatch the request.
+ //
+ // Parameter is
+ // required.
+ //
+ // AWS::Region
+ Region *string
+}
+
+// ValidateRequired validates required parameters are set.
+func (p EndpointParameters) ValidateRequired() error {
+ if p.UseFIPS == nil {
+ return fmt.Errorf("parameter UseFIPS is required")
+ }
+
+ return nil
+}
+
+// WithDefaults returns a shallow copy of EndpointParameters with default values
+// applied to members where applicable.
+func (p EndpointParameters) WithDefaults() EndpointParameters {
+ if p.UseFIPS == nil {
+ p.UseFIPS = ptr.Bool(false)
+ }
+ return p
+}
+
+type stringSlice []string
+
+func (s stringSlice) Get(i int) *string {
+ if i < 0 || i >= len(s) {
+ return nil
+ }
+
+ v := s[i]
+ return &v
+}
+
+// EndpointResolverV2 provides the interface for resolving service endpoints.
+type EndpointResolverV2 interface {
+ // ResolveEndpoint attempts to resolve the endpoint with the provided options,
+ // returning the endpoint if found. Otherwise an error is returned.
+ ResolveEndpoint(ctx context.Context, params EndpointParameters) (
+ smithyendpoints.Endpoint, error,
+ )
+}
+
+// resolver provides the implementation for resolving endpoints.
+type resolver struct{}
+
+func NewDefaultEndpointResolverV2() EndpointResolverV2 {
+ return &resolver{}
+}
+
+// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+// returning the endpoint if found. Otherwise an error is returned.
+func (r *resolver) ResolveEndpoint(
+ ctx context.Context, params EndpointParameters,
+) (
+ endpoint smithyendpoints.Endpoint, err error,
+) {
+ params = params.WithDefaults()
+ if err = params.ValidateRequired(); err != nil {
+ return endpoint, fmt.Errorf("endpoint parameters are not valid, %w", err)
+ }
+ _UseFIPS := *params.UseFIPS
+
+ if exprVal := params.Endpoint; exprVal != nil {
+ _Endpoint := *exprVal
+ _ = _Endpoint
+ if _UseFIPS == true {
+ return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: FIPS and custom endpoint are not supported")
+ }
+ uriString := _Endpoint
+
+ uri, err := url.Parse(uriString)
+ if err != nil {
+ return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+ }
+
+ return smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ }, nil
+ }
+ if exprVal := params.Region; exprVal != nil {
+ _Region := *exprVal
+ _ = _Region
+ if exprVal := awsrulesfn.GetPartition(_Region); exprVal != nil {
+ _PartitionResult := *exprVal
+ _ = _PartitionResult
+ if _UseFIPS == true {
+ uriString := func() string {
+ var out strings.Builder
+ out.WriteString("https://backup-search-fips.")
+ out.WriteString(_PartitionResult.ImplicitGlobalRegion)
+ out.WriteString(".")
+ out.WriteString(_PartitionResult.DualStackDnsSuffix)
+ return out.String()
+ }()
+
+ uri, err := url.Parse(uriString)
+ if err != nil {
+ return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+ }
+
+ return smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, _PartitionResult.ImplicitGlobalRegion)
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }, nil
+ }
+ uriString := func() string {
+ var out strings.Builder
+ out.WriteString("https://backup-search.")
+ out.WriteString(_PartitionResult.ImplicitGlobalRegion)
+ out.WriteString(".")
+ out.WriteString(_PartitionResult.DualStackDnsSuffix)
+ return out.String()
+ }()
+
+ uri, err := url.Parse(uriString)
+ if err != nil {
+ return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+ }
+
+ return smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, _PartitionResult.ImplicitGlobalRegion)
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }, nil
+ }
+ return endpoint, fmt.Errorf("Endpoint resolution failed. Invalid operation or environment input.")
+ }
+ return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Missing Region")
+}
+
+type endpointParamsBinder interface {
+ bindEndpointParams(*EndpointParameters)
+}
+
+func bindEndpointParams(ctx context.Context, input interface{}, options Options) *EndpointParameters {
+ params := &EndpointParameters{}
+
+ params.UseFIPS = aws.Bool(options.EndpointOptions.UseFIPSEndpoint == aws.FIPSEndpointStateEnabled)
+ params.Endpoint = options.BaseEndpoint
+ params.Region = bindRegion(options.Region)
+
+ if b, ok := input.(endpointParamsBinder); ok {
+ b.bindEndpointParams(params)
+ }
+
+ return params
+}
+
+type resolveEndpointV2Middleware struct {
+ options Options
+}
+
+func (*resolveEndpointV2Middleware) ID() string {
+ return "ResolveEndpointV2"
+}
+
+func (m *resolveEndpointV2Middleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+ out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "ResolveEndpoint")
+ defer span.End()
+
+ if awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+ return next.HandleFinalize(ctx, in)
+ }
+
+ req, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+ }
+
+ if m.options.EndpointResolverV2 == nil {
+ return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+ }
+
+ params := bindEndpointParams(ctx, getOperationInput(ctx), m.options)
+ endpt, err := timeOperationMetric(ctx, "client.call.resolve_endpoint_duration",
+ func() (smithyendpoints.Endpoint, error) {
+ return m.options.EndpointResolverV2.ResolveEndpoint(ctx, *params)
+ })
+ if err != nil {
+ return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+ }
+
+ span.SetProperty("client.call.resolved_endpoint", endpt.URI.String())
+
+ if endpt.URI.RawPath == "" && req.URL.RawPath != "" {
+ endpt.URI.RawPath = endpt.URI.Path
+ }
+ req.URL.Scheme = endpt.URI.Scheme
+ req.URL.Host = endpt.URI.Host
+ req.URL.Path = smithyhttp.JoinPath(endpt.URI.Path, req.URL.Path)
+ req.URL.RawPath = smithyhttp.JoinPath(endpt.URI.RawPath, req.URL.RawPath)
+ for k := range endpt.Headers {
+ req.Header.Set(k, endpt.Headers.Get(k))
+ }
+
+ rscheme := getResolvedAuthScheme(ctx)
+ if rscheme == nil {
+ return out, metadata, fmt.Errorf("no resolved auth scheme")
+ }
+
+ opts, _ := smithyauth.GetAuthOptions(&endpt.Properties)
+ for _, o := range opts {
+ rscheme.SignerProperties.SetAll(&o.SignerProperties)
+ }
+
+ span.End()
+ return next.HandleFinalize(ctx, in)
+}
diff --git a/service/backupsearch/endpoints_config_test.go b/service/backupsearch/endpoints_config_test.go
new file mode 100644
index 00000000000..331a713db76
--- /dev/null
+++ b/service/backupsearch/endpoints_config_test.go
@@ -0,0 +1,139 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "github.com/aws/aws-sdk-go-v2/aws"
+ "os"
+ "reflect"
+ "testing"
+)
+
+type mockConfigSource struct {
+ global string
+ service string
+ ignore bool
+}
+
+// GetIgnoreConfiguredEndpoints is used to determine when the configured
+// endpoints feature should be disabled.
+func (m mockConfigSource) GetIgnoreConfiguredEndpoints(context.Context) (bool, bool, error) {
+ return m.ignore, m.ignore, nil
+}
+
+// GetServiceBaseEndpoint is used to retrieve the endpoint configured for the
+// given normalized SDK ID.
+func (m mockConfigSource) GetServiceBaseEndpoint(ctx context.Context, sdkID string) (string, bool, error) {
+ if m.service != "" {
+ return m.service, true, nil
+ }
+ return "", false, nil
+}
+
+func TestResolveBaseEndpoint(t *testing.T) {
+ cases := map[string]struct {
+ envGlobal string
+ envService string
+ envIgnore bool
+ configGlobal string
+ configService string
+ configIgnore bool
+ clientEndpoint *string
+ expectURL *string
+ }{
+ "env ignore": {
+ envGlobal: "https://env-global.dev",
+ envService: "https://env-backupsearch.dev",
+ envIgnore: true,
+ configGlobal: "http://config-global.dev",
+ configService: "http://config-backupsearch.dev",
+ expectURL: nil,
+ },
+ "env global": {
+ envGlobal: "https://env-global.dev",
+ configGlobal: "http://config-global.dev",
+ configService: "http://config-backupsearch.dev",
+ expectURL: aws.String("https://env-global.dev"),
+ },
+ "env service": {
+ envGlobal: "https://env-global.dev",
+ envService: "https://env-backupsearch.dev",
+ configGlobal: "http://config-global.dev",
+ configService: "http://config-backupsearch.dev",
+ expectURL: aws.String("https://env-backupsearch.dev"),
+ },
+ "config ignore": {
+ envGlobal: "https://env-global.dev",
+ envService: "https://env-backupsearch.dev",
+ configGlobal: "http://config-global.dev",
+ configService: "http://config-backupsearch.dev",
+ configIgnore: true,
+ expectURL: nil,
+ },
+ "config global": {
+ configGlobal: "http://config-global.dev",
+ expectURL: aws.String("http://config-global.dev"),
+ },
+ "config service": {
+ configGlobal: "http://config-global.dev",
+ configService: "http://config-backupsearch.dev",
+ expectURL: aws.String("http://config-backupsearch.dev"),
+ },
+ "client": {
+ envGlobal: "https://env-global.dev",
+ envService: "https://env-backupsearch.dev",
+ configGlobal: "http://config-global.dev",
+ configService: "http://config-backupsearch.dev",
+ clientEndpoint: aws.String("https://client-backupsearch.dev"),
+ expectURL: aws.String("https://client-backupsearch.dev"),
+ },
+ }
+
+ for name, c := range cases {
+ t.Run(name, func(t *testing.T) {
+ os.Clearenv()
+
+ awsConfig := aws.Config{}
+ ignore := c.envIgnore || c.configIgnore
+
+ if c.configGlobal != "" && !ignore {
+ awsConfig.BaseEndpoint = aws.String(c.configGlobal)
+ }
+
+ if c.envGlobal != "" {
+ t.Setenv("AWS_ENDPOINT_URL", c.envGlobal)
+ if !ignore {
+ awsConfig.BaseEndpoint = aws.String(c.envGlobal)
+ }
+ }
+
+ if c.envService != "" {
+ t.Setenv("AWS_ENDPOINT_URL_BACKUPSEARCH", c.envService)
+ }
+
+ awsConfig.ConfigSources = []interface{}{
+ mockConfigSource{
+ global: c.envGlobal,
+ service: c.envService,
+ ignore: c.envIgnore,
+ },
+ mockConfigSource{
+ global: c.configGlobal,
+ service: c.configService,
+ ignore: c.configIgnore,
+ },
+ }
+
+ client := NewFromConfig(awsConfig, func(o *Options) {
+ if c.clientEndpoint != nil {
+ o.BaseEndpoint = c.clientEndpoint
+ }
+ })
+
+ if e, a := c.expectURL, client.options.BaseEndpoint; !reflect.DeepEqual(e, a) {
+ t.Errorf("expect endpoint %v , got %v", e, a)
+ }
+ })
+ }
+}
diff --git a/service/backupsearch/endpoints_test.go b/service/backupsearch/endpoints_test.go
new file mode 100644
index 00000000000..6f263544e8a
--- /dev/null
+++ b/service/backupsearch/endpoints_test.go
@@ -0,0 +1,774 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ smithy "github.com/aws/smithy-go"
+ smithyauth "github.com/aws/smithy-go/auth"
+ smithyendpoints "github.com/aws/smithy-go/endpoints"
+ "github.com/aws/smithy-go/ptr"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "net/http"
+ "net/url"
+ "reflect"
+ "strings"
+ "testing"
+)
+
+// For custom endpoint with region not set and fips disabled
+func TestEndpointCase0(t *testing.T) {
+ var params = EndpointParameters{
+ Endpoint: ptr.String("https://example.com"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://example.com")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: smithy.Properties{},
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For custom endpoint with fips enabled
+func TestEndpointCase1(t *testing.T) {
+ var params = EndpointParameters{
+ Endpoint: ptr.String("https://example.com"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err == nil {
+ t.Fatalf("expect error, got none")
+ }
+ if e, a := "Invalid Configuration: FIPS and custom endpoint are not supported", err.Error(); !strings.Contains(a, e) {
+ t.Errorf("expect %v error in %v", e, a)
+ }
+}
+
+// For region us-east-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase2(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-east-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.us-east-1.api.aws")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-east-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase3(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-east-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.us-east-1.api.aws")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region cn-northwest-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase4(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("cn-northwest-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.cn-northwest-1.api.amazonwebservices.com.cn")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region cn-northwest-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase5(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("cn-northwest-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.cn-northwest-1.api.amazonwebservices.com.cn")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "cn-northwest-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-gov-west-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase6(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-gov-west-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.us-gov-west-1.api.aws")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-gov-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-gov-west-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase7(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-gov-west-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.us-gov-west-1.api.aws")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-gov-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-iso-east-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase8(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-iso-east-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.us-iso-east-1.c2s.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-iso-east-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase9(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-iso-east-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.us-iso-east-1.c2s.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-isob-east-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase10(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-isob-east-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.us-isob-east-1.sc2s.sgov.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-isob-east-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase11(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-isob-east-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.us-isob-east-1.sc2s.sgov.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region eu-isoe-west-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase12(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("eu-isoe-west-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.eu-isoe-west-1.cloud.adc-e.uk")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "eu-isoe-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region eu-isoe-west-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase13(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("eu-isoe-west-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.eu-isoe-west-1.cloud.adc-e.uk")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "eu-isoe-west-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-isof-south-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase14(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-isof-south-1"),
+ UseFIPS: ptr.Bool(true),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search-fips.us-isof-south-1.csp.hci.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isof-south-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-isof-south-1 with FIPS disabled and DualStack enabled
+func TestEndpointCase15(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("us-isof-south-1"),
+ UseFIPS: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://backup-search.us-isof-south-1.csp.hci.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isof-south-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// Missing region
+func TestEndpointCase16(t *testing.T) {
+ var params = EndpointParameters{}
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err == nil {
+ t.Fatalf("expect error, got none")
+ }
+ if e, a := "Invalid Configuration: Missing Region", err.Error(); !strings.Contains(a, e) {
+ t.Errorf("expect %v error in %v", e, a)
+ }
+}
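The expected URIs in the generated test cases above are produced by substituting the requested region into a per-partition hostname template (for example `backup-search-fips.{region}.sc2s.sgov.gov`). A minimal sketch of that substitution, using a hypothetical `resolveHostname` helper that is not part of the SDK (the real resolver lives in `internal/endpoints/v2` and also handles FIPS/dual-stack variants and endpoint overrides):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveHostname fills the {region} placeholder in an endpoint hostname
// template, mirroring how a partition entry pairs a template such as
// "backup-search-fips.{region}.sc2s.sgov.gov" with a concrete region.
func resolveHostname(template, region string) string {
	return strings.ReplaceAll(template, "{region}", region)
}

func main() {
	// Matches the expected URI host in the aws-iso-b FIPS test case above.
	fmt.Println(resolveHostname("backup-search-fips.{region}.sc2s.sgov.gov", "us-isob-east-1"))
}
```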
diff --git a/service/backupsearch/generated.json b/service/backupsearch/generated.json
new file mode 100644
index 00000000000..5c2351d743e
--- /dev/null
+++ b/service/backupsearch/generated.json
@@ -0,0 +1,45 @@
+{
+ "dependencies": {
+ "github.com/aws/aws-sdk-go-v2": "v1.4.0",
+ "github.com/aws/aws-sdk-go-v2/internal/configsources": "v0.0.0-00010101000000-000000000000",
+ "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2": "v2.0.0-00010101000000-000000000000",
+ "github.com/aws/smithy-go": "v1.4.0"
+ },
+ "files": [
+ "api_client.go",
+ "api_client_test.go",
+ "api_op_GetSearchJob.go",
+ "api_op_GetSearchResultExportJob.go",
+ "api_op_ListSearchJobBackups.go",
+ "api_op_ListSearchJobResults.go",
+ "api_op_ListSearchJobs.go",
+ "api_op_ListSearchResultExportJobs.go",
+ "api_op_ListTagsForResource.go",
+ "api_op_StartSearchJob.go",
+ "api_op_StartSearchResultExportJob.go",
+ "api_op_StopSearchJob.go",
+ "api_op_TagResource.go",
+ "api_op_UntagResource.go",
+ "auth.go",
+ "deserializers.go",
+ "doc.go",
+ "endpoints.go",
+ "endpoints_config_test.go",
+ "endpoints_test.go",
+ "generated.json",
+ "internal/endpoints/endpoints.go",
+ "internal/endpoints/endpoints_test.go",
+ "options.go",
+ "protocol_test.go",
+ "serializers.go",
+ "snapshot_test.go",
+ "types/enums.go",
+ "types/errors.go",
+ "types/types.go",
+ "types/types_exported_test.go",
+ "validators.go"
+ ],
+ "go": "1.15",
+ "module": "github.com/aws/aws-sdk-go-v2/service/backupsearch",
+ "unstable": false
+}
diff --git a/service/backupsearch/go.mod b/service/backupsearch/go.mod
new file mode 100644
index 00000000000..ca0e44c9bbc
--- /dev/null
+++ b/service/backupsearch/go.mod
@@ -0,0 +1,16 @@
+module github.com/aws/aws-sdk-go-v2/service/backupsearch
+
+go 1.21
+
+require (
+ github.com/aws/aws-sdk-go-v2 v1.32.6
+ github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.25
+ github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.25
+ github.com/aws/smithy-go v1.22.1
+)
+
+replace github.com/aws/aws-sdk-go-v2 => ../../
+
+replace github.com/aws/aws-sdk-go-v2/internal/configsources => ../../internal/configsources/
+
+replace github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 => ../../internal/endpoints/v2/
diff --git a/service/backupsearch/go.sum b/service/backupsearch/go.sum
new file mode 100644
index 00000000000..bd2678891af
--- /dev/null
+++ b/service/backupsearch/go.sum
@@ -0,0 +1,2 @@
+github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
+github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
diff --git a/service/backupsearch/go_module_metadata.go b/service/backupsearch/go_module_metadata.go
new file mode 100644
index 00000000000..954ee0914a7
--- /dev/null
+++ b/service/backupsearch/go_module_metadata.go
@@ -0,0 +1,6 @@
+// Code generated by internal/repotools/cmd/updatemodulemeta DO NOT EDIT.
+
+package backupsearch
+
+// goModuleVersion is the tagged release for this module
+const goModuleVersion = "1.0.0"
diff --git a/service/backupsearch/internal/endpoints/endpoints.go b/service/backupsearch/internal/endpoints/endpoints.go
new file mode 100644
index 00000000000..9c40aa2adff
--- /dev/null
+++ b/service/backupsearch/internal/endpoints/endpoints.go
@@ -0,0 +1,296 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package endpoints
+
+import (
+ "github.com/aws/aws-sdk-go-v2/aws"
+ endpoints "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2"
+ "github.com/aws/smithy-go/logging"
+ "regexp"
+)
+
+// Options is the endpoint resolver configuration options
+type Options struct {
+ // Logger is a logging implementation that log events should be sent to.
+ Logger logging.Logger
+
+ // LogDeprecated indicates that deprecated endpoints should be logged to the
+ // provided logger.
+ LogDeprecated bool
+
+ // ResolvedRegion is used to override the region to be resolved, rather than
+ // using the value passed to the ResolveEndpoint method. This value is used by the
+ // SDK to translate regions like fips-us-east-1 or us-east-1-fips to an alternative
+ // name. You must not set this value directly in your application.
+ ResolvedRegion string
+
+ // DisableHTTPS informs the resolver to return an endpoint that does not use the
+ // HTTPS scheme.
+ DisableHTTPS bool
+
+ // UseDualStackEndpoint specifies the resolver must resolve a dual-stack endpoint.
+ UseDualStackEndpoint aws.DualStackEndpointState
+
+ // UseFIPSEndpoint specifies the resolver must resolve a FIPS endpoint.
+ UseFIPSEndpoint aws.FIPSEndpointState
+}
+
+func (o Options) GetResolvedRegion() string {
+ return o.ResolvedRegion
+}
+
+func (o Options) GetDisableHTTPS() bool {
+ return o.DisableHTTPS
+}
+
+func (o Options) GetUseDualStackEndpoint() aws.DualStackEndpointState {
+ return o.UseDualStackEndpoint
+}
+
+func (o Options) GetUseFIPSEndpoint() aws.FIPSEndpointState {
+ return o.UseFIPSEndpoint
+}
+
+func transformToSharedOptions(options Options) endpoints.Options {
+ return endpoints.Options{
+ Logger: options.Logger,
+ LogDeprecated: options.LogDeprecated,
+ ResolvedRegion: options.ResolvedRegion,
+ DisableHTTPS: options.DisableHTTPS,
+ UseDualStackEndpoint: options.UseDualStackEndpoint,
+ UseFIPSEndpoint: options.UseFIPSEndpoint,
+ }
+}
+
+// Resolver BackupSearch endpoint resolver
+type Resolver struct {
+ partitions endpoints.Partitions
+}
+
+// ResolveEndpoint resolves the service endpoint for the given region and options
+func (r *Resolver) ResolveEndpoint(region string, options Options) (endpoint aws.Endpoint, err error) {
+ if len(region) == 0 {
+ return endpoint, &aws.MissingRegionError{}
+ }
+
+ opt := transformToSharedOptions(options)
+ return r.partitions.ResolveEndpoint(region, opt)
+}
+
+// New returns a new Resolver
+func New() *Resolver {
+ return &Resolver{
+ partitions: defaultPartitions,
+ }
+}
+
+var partitionRegexp = struct {
+ Aws *regexp.Regexp
+ AwsCn *regexp.Regexp
+ AwsIso *regexp.Regexp
+ AwsIsoB *regexp.Regexp
+ AwsIsoE *regexp.Regexp
+ AwsIsoF *regexp.Regexp
+ AwsUsGov *regexp.Regexp
+}{
+
+ Aws: regexp.MustCompile("^(us|eu|ap|sa|ca|me|af|il|mx)\\-\\w+\\-\\d+$"),
+ AwsCn: regexp.MustCompile("^cn\\-\\w+\\-\\d+$"),
+ AwsIso: regexp.MustCompile("^us\\-iso\\-\\w+\\-\\d+$"),
+ AwsIsoB: regexp.MustCompile("^us\\-isob\\-\\w+\\-\\d+$"),
+ AwsIsoE: regexp.MustCompile("^eu\\-isoe\\-\\w+\\-\\d+$"),
+ AwsIsoF: regexp.MustCompile("^us\\-isof\\-\\w+\\-\\d+$"),
+ AwsUsGov: regexp.MustCompile("^us\\-gov\\-\\w+\\-\\d+$"),
+}
+
+var defaultPartitions = endpoints.Partitions{
+ {
+ ID: "aws",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "backup-search.{region}.api.aws",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.amazonaws.com",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.api.aws",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.amazonaws.com",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.Aws,
+ IsRegionalized: true,
+ },
+ {
+ ID: "aws-cn",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "backup-search.{region}.api.amazonwebservices.com.cn",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.amazonaws.com.cn",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.api.amazonwebservices.com.cn",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.amazonaws.com.cn",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.AwsCn,
+ IsRegionalized: true,
+ },
+ {
+ ID: "aws-iso",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.c2s.ic.gov",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.c2s.ic.gov",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.AwsIso,
+ IsRegionalized: true,
+ },
+ {
+ ID: "aws-iso-b",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.sc2s.sgov.gov",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.sc2s.sgov.gov",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.AwsIsoB,
+ IsRegionalized: true,
+ },
+ {
+ ID: "aws-iso-e",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.cloud.adc-e.uk",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.cloud.adc-e.uk",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.AwsIsoE,
+ IsRegionalized: true,
+ },
+ {
+ ID: "aws-iso-f",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.csp.hci.ic.gov",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.csp.hci.ic.gov",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.AwsIsoF,
+ IsRegionalized: true,
+ },
+ {
+ ID: "aws-us-gov",
+ Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+ {
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "backup-search.{region}.api.aws",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: endpoints.FIPSVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.amazonaws.com",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "backup-search-fips.{region}.api.aws",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ {
+ Variant: 0,
+ }: {
+ Hostname: "backup-search.{region}.amazonaws.com",
+ Protocols: []string{"https"},
+ SignatureVersions: []string{"v4"},
+ },
+ },
+ RegionRegex: partitionRegexp.AwsUsGov,
+ IsRegionalized: true,
+ },
+}
diff --git a/service/backupsearch/internal/endpoints/endpoints_test.go b/service/backupsearch/internal/endpoints/endpoints_test.go
new file mode 100644
index 00000000000..08e5da2d833
--- /dev/null
+++ b/service/backupsearch/internal/endpoints/endpoints_test.go
@@ -0,0 +1,11 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package endpoints
+
+import (
+ "testing"
+)
+
+func TestRegexCompile(t *testing.T) {
+ _ = defaultPartitions
+}
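Each partition in the generated table above claims a region-name shape via `RegionRegex`, and resolution picks the partition whose regex matches the requested region. A small self-contained sketch of that lookup, reusing three of the patterns from the table (the `partitionFor` helper here is hypothetical; the real lookup is in `internal/endpoints/v2`):

```go
package main

import (
	"fmt"
	"regexp"
)

// Region-shape patterns copied from the generated partition table above.
var partitionPatterns = map[string]*regexp.Regexp{
	"aws-iso-b": regexp.MustCompile(`^us-isob-\w+-\d+$`),
	"aws-iso-e": regexp.MustCompile(`^eu-isoe-\w+-\d+$`),
	"aws-iso-f": regexp.MustCompile(`^us-isof-\w+-\d+$`),
}

// partitionFor returns the ID of the partition whose regex matches the
// region, or "" when none of the sampled partitions match.
func partitionFor(region string) string {
	for id, re := range partitionPatterns {
		if re.MatchString(region) {
			return id
		}
	}
	return ""
}

func main() {
	fmt.Println(partitionFor("us-isob-east-1"))
}
```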
diff --git a/service/backupsearch/options.go b/service/backupsearch/options.go
new file mode 100644
index 00000000000..c76179ced9c
--- /dev/null
+++ b/service/backupsearch/options.go
@@ -0,0 +1,232 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "github.com/aws/aws-sdk-go-v2/aws"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+ smithyauth "github.com/aws/smithy-go/auth"
+ "github.com/aws/smithy-go/logging"
+ "github.com/aws/smithy-go/metrics"
+ "github.com/aws/smithy-go/middleware"
+ "github.com/aws/smithy-go/tracing"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "net/http"
+)
+
+type HTTPClient interface {
+ Do(*http.Request) (*http.Response, error)
+}
+
+type Options struct {
+ // Set of options to modify how an operation is invoked. These apply to all
+ // operations invoked for this client. Use functional options on operation call to
+ // modify this list for per operation behavior.
+ APIOptions []func(*middleware.Stack) error
+
+ // The optional application specific identifier appended to the User-Agent header.
+ AppID string
+
+ // This endpoint will be given as input to an EndpointResolverV2. It is used for
+ // providing a custom base endpoint that is subject to modifications by the
+ // processing EndpointResolverV2.
+ BaseEndpoint *string
+
+ // Configures the events that will be sent to the configured logger.
+ ClientLogMode aws.ClientLogMode
+
+ // The credentials object to use when signing requests.
+ Credentials aws.CredentialsProvider
+
+ // The configuration DefaultsMode that the SDK should use when constructing the
+ // clients initial default settings.
+ DefaultsMode aws.DefaultsMode
+
+ // The endpoint options to be used when attempting to resolve an endpoint.
+ EndpointOptions EndpointResolverOptions
+
+ // The service endpoint resolver.
+ //
+ // Deprecated: EndpointResolver and WithEndpointResolver. Providing a
+ // value for this field will likely prevent you from using any endpoint-related
+ // service features released after the introduction of EndpointResolverV2 and
+ // BaseEndpoint.
+ //
+ // To migrate an EndpointResolver implementation that uses a custom endpoint, set
+ // the client option BaseEndpoint instead.
+ EndpointResolver EndpointResolver
+
+ // Resolves the endpoint used for a particular service operation. This should be
+ // used over the deprecated EndpointResolver.
+ EndpointResolverV2 EndpointResolverV2
+
+ // Signature Version 4 (SigV4) Signer
+ HTTPSignerV4 HTTPSignerV4
+
+ // The logger writer interface to write logging messages to.
+ Logger logging.Logger
+
+ // The client meter provider.
+ MeterProvider metrics.MeterProvider
+
+ // The region to send requests to. (Required)
+ Region string
+
+ // RetryMaxAttempts specifies the maximum number of attempts an API client makes
+ // calling an operation that fails with a retryable error. A value of 0 is ignored, and
+ // will not be used to configure the API client created default retryer, or modify
+ // per operation call's retry max attempts.
+ //
+ // If specified in an operation call's functional options with a value that is
+ // different than the constructed client's Options, the Client's Retryer will be
+ // wrapped to use the operation's specific RetryMaxAttempts value.
+ RetryMaxAttempts int
+
+ // RetryMode specifies the retry mode the API client will be created with, if
+ // Retryer option is not also specified.
+ //
+ // When creating a new API client, this member will only be used if the Retryer
+ // Options member is nil. This value will be ignored if Retryer is not nil.
+ //
+ // Currently does not support per operation call overrides; this may change in the future.
+ RetryMode aws.RetryMode
+
+ // Retryer guides how HTTP requests should be retried in case of recoverable
+ // failures. When nil the API client will use a default retryer. The kind of
+ // default retry created by the API client can be changed with the RetryMode
+ // option.
+ Retryer aws.Retryer
+
+ // The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
+ // to DefaultsModeAuto and is initialized using config.LoadDefaultConfig. You
+ // should not populate this structure programmatically, or rely on the values here
+ // within your applications.
+ RuntimeEnvironment aws.RuntimeEnvironment
+
+ // The client tracer provider.
+ TracerProvider tracing.TracerProvider
+
+ // The initial DefaultsMode used when the client options were constructed. If the
+ // DefaultsMode was set to aws.DefaultsModeAuto this will store what the resolved
+ // value was at that point in time.
+ //
+ // Currently does not support per operation call overrides; this may change in the future.
+ resolvedDefaultsMode aws.DefaultsMode
+
+ // The HTTP client to invoke API calls with. Defaults to client's default HTTP
+ // implementation if nil.
+ HTTPClient HTTPClient
+
+ // The auth scheme resolver which determines how to authenticate for each
+ // operation.
+ AuthSchemeResolver AuthSchemeResolver
+
+ // The list of auth schemes supported by the client.
+ AuthSchemes []smithyhttp.AuthScheme
+}
+
+// Copy creates a clone where the APIOptions list is deep copied.
+func (o Options) Copy() Options {
+ to := o
+ to.APIOptions = make([]func(*middleware.Stack) error, len(o.APIOptions))
+ copy(to.APIOptions, o.APIOptions)
+
+ return to
+}
+
+func (o Options) GetIdentityResolver(schemeID string) smithyauth.IdentityResolver {
+ if schemeID == "aws.auth#sigv4" {
+ return getSigV4IdentityResolver(o)
+ }
+ if schemeID == "smithy.api#noAuth" {
+ return &smithyauth.AnonymousIdentityResolver{}
+ }
+ return nil
+}
+
+// WithAPIOptions returns a functional option for setting the Client's APIOptions
+// option.
+func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options) {
+ return func(o *Options) {
+ o.APIOptions = append(o.APIOptions, optFns...)
+ }
+}
+
+// Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for
+// this field will likely prevent you from using any endpoint-related service
+// features released after the introduction of EndpointResolverV2 and BaseEndpoint.
+//
+// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+// the client option BaseEndpoint instead.
+func WithEndpointResolver(v EndpointResolver) func(*Options) {
+ return func(o *Options) {
+ o.EndpointResolver = v
+ }
+}
+
+// WithEndpointResolverV2 returns a functional option for setting the Client's
+// EndpointResolverV2 option.
+func WithEndpointResolverV2(v EndpointResolverV2) func(*Options) {
+ return func(o *Options) {
+ o.EndpointResolverV2 = v
+ }
+}
+
+func getSigV4IdentityResolver(o Options) smithyauth.IdentityResolver {
+ if o.Credentials != nil {
+ return &internalauthsmithy.CredentialsProviderAdapter{Provider: o.Credentials}
+ }
+ return nil
+}
+
+// WithSigV4SigningName applies an override to the authentication workflow to
+// use the given signing name for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing name from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningName(name string) func(*Options) {
+ fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+ ) {
+ return next.HandleInitialize(awsmiddleware.SetSigningName(ctx, name), in)
+ }
+ return func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+ return s.Initialize.Add(
+ middleware.InitializeMiddlewareFunc("withSigV4SigningName", fn),
+ middleware.Before,
+ )
+ })
+ }
+}
+
+// WithSigV4SigningRegion applies an override to the authentication workflow to
+// use the given signing region for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing region from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningRegion(region string) func(*Options) {
+ fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+ ) {
+ return next.HandleInitialize(awsmiddleware.SetSigningRegion(ctx, region), in)
+ }
+ return func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+ return s.Initialize.Add(
+ middleware.InitializeMiddlewareFunc("withSigV4SigningRegion", fn),
+ middleware.Before,
+ )
+ })
+ }
+}
+
+func ignoreAnonymousAuth(options *Options) {
+ if aws.IsCredentialsProvider(options.Credentials, (*aws.AnonymousCredentials)(nil)) {
+ options.Credentials = nil
+ }
+}
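`Options`, `WithAPIOptions`, and `WithEndpointResolverV2` above follow Go's functional-options pattern: each `With*` helper returns a `func(*Options)` that the client constructor applies over the defaults. A self-contained toy sketch of the same shape (the names `toyOptions`, `WithRegion`, `WithAppID`, and `newToyClient` are hypothetical and only illustrate the pattern):

```go
package main

import "fmt"

// toyOptions stands in for the generated Options struct.
type toyOptions struct {
	Region string
	AppID  string
}

// Each With* helper returns a mutator, just like WithEndpointResolverV2 above.
func WithRegion(r string) func(*toyOptions) { return func(o *toyOptions) { o.Region = r } }
func WithAppID(id string) func(*toyOptions) { return func(o *toyOptions) { o.AppID = id } }

// newToyClient applies each functional option to a copy of the defaults,
// the same way the generated client constructor applies optFns to Options.
func newToyClient(optFns ...func(*toyOptions)) toyOptions {
	o := toyOptions{Region: "us-east-1"} // hypothetical default
	for _, fn := range optFns {
		fn(&o)
	}
	return o
}

func main() {
	opts := newToyClient(WithRegion("eu-isoe-west-1"), WithAppID("my-app"))
	fmt.Println(opts.Region, opts.AppID)
}
```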
diff --git a/service/backupsearch/protocol_test.go b/service/backupsearch/protocol_test.go
new file mode 100644
index 00000000000..65bcce75078
--- /dev/null
+++ b/service/backupsearch/protocol_test.go
@@ -0,0 +1,3 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
diff --git a/service/backupsearch/serializers.go b/service/backupsearch/serializers.go
new file mode 100644
index 00000000000..3a3c60554d0
--- /dev/null
+++ b/service/backupsearch/serializers.go
@@ -0,0 +1,1357 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "bytes"
+ "context"
+ "fmt"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ smithy "github.com/aws/smithy-go"
+ "github.com/aws/smithy-go/encoding/httpbinding"
+ smithyjson "github.com/aws/smithy-go/encoding/json"
+ "github.com/aws/smithy-go/middleware"
+ smithytime "github.com/aws/smithy-go/time"
+ "github.com/aws/smithy-go/tracing"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+type awsRestjson1_serializeOpGetSearchJob struct {
+}
+
+func (*awsRestjson1_serializeOpGetSearchJob) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetSearchJob) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetSearchJobInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-jobs/{SearchJobIdentifier}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsGetSearchJobInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetSearchJobInput(v *GetSearchJobInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.SearchJobIdentifier == nil || len(*v.SearchJobIdentifier) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member SearchJobIdentifier must not be empty")}
+ }
+ if v.SearchJobIdentifier != nil {
+ if err := encoder.SetURI("SearchJobIdentifier").String(*v.SearchJobIdentifier); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpGetSearchResultExportJob struct {
+}
+
+func (*awsRestjson1_serializeOpGetSearchResultExportJob) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetSearchResultExportJob) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetSearchResultExportJobInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/export-search-jobs/{ExportJobIdentifier}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsGetSearchResultExportJobInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetSearchResultExportJobInput(v *GetSearchResultExportJobInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ExportJobIdentifier == nil || len(*v.ExportJobIdentifier) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member ExportJobIdentifier must not be empty")}
+ }
+ if v.ExportJobIdentifier != nil {
+ if err := encoder.SetURI("ExportJobIdentifier").String(*v.ExportJobIdentifier); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpListSearchJobBackups struct {
+}
+
+func (*awsRestjson1_serializeOpListSearchJobBackups) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListSearchJobBackups) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListSearchJobBackupsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-jobs/{SearchJobIdentifier}/backups")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListSearchJobBackupsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListSearchJobBackupsInput(v *ListSearchJobBackupsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.MaxResults != nil {
+ encoder.SetQuery("maxResults").Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ encoder.SetQuery("nextToken").String(*v.NextToken)
+ }
+
+ if v.SearchJobIdentifier == nil || len(*v.SearchJobIdentifier) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member SearchJobIdentifier must not be empty")}
+ }
+ if v.SearchJobIdentifier != nil {
+ if err := encoder.SetURI("SearchJobIdentifier").String(*v.SearchJobIdentifier); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpListSearchJobResults struct {
+}
+
+func (*awsRestjson1_serializeOpListSearchJobResults) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListSearchJobResults) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListSearchJobResultsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-jobs/{SearchJobIdentifier}/search-results")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListSearchJobResultsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListSearchJobResultsInput(v *ListSearchJobResultsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.MaxResults != nil {
+ encoder.SetQuery("maxResults").Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ encoder.SetQuery("nextToken").String(*v.NextToken)
+ }
+
+ if v.SearchJobIdentifier == nil || len(*v.SearchJobIdentifier) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member SearchJobIdentifier must not be empty")}
+ }
+ if v.SearchJobIdentifier != nil {
+ if err := encoder.SetURI("SearchJobIdentifier").String(*v.SearchJobIdentifier); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpListSearchJobs struct {
+}
+
+func (*awsRestjson1_serializeOpListSearchJobs) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListSearchJobs) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListSearchJobsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-jobs")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListSearchJobsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListSearchJobsInput(v *ListSearchJobsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if len(v.ByStatus) > 0 {
+ encoder.SetQuery("Status").String(string(v.ByStatus))
+ }
+
+ if v.MaxResults != nil {
+ encoder.SetQuery("MaxResults").Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ encoder.SetQuery("NextToken").String(*v.NextToken)
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpListSearchResultExportJobs struct {
+}
+
+func (*awsRestjson1_serializeOpListSearchResultExportJobs) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListSearchResultExportJobs) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListSearchResultExportJobsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/export-search-jobs")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListSearchResultExportJobsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListSearchResultExportJobsInput(v *ListSearchResultExportJobsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.MaxResults != nil {
+ encoder.SetQuery("MaxResults").Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ encoder.SetQuery("NextToken").String(*v.NextToken)
+ }
+
+ if v.SearchJobIdentifier != nil {
+ encoder.SetQuery("SearchJobIdentifier").String(*v.SearchJobIdentifier)
+ }
+
+ if len(v.Status) > 0 {
+ encoder.SetQuery("Status").String(string(v.Status))
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpListTagsForResource struct {
+}
+
+func (*awsRestjson1_serializeOpListTagsForResource) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListTagsForResource) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListTagsForResourceInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/tags/{ResourceArn}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListTagsForResourceInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListTagsForResourceInput(v *ListTagsForResourceInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ResourceArn == nil || len(*v.ResourceArn) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member ResourceArn must not be empty")}
+ }
+ if v.ResourceArn != nil {
+ if err := encoder.SetURI("ResourceArn").String(*v.ResourceArn); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpStartSearchJob struct {
+}
+
+func (*awsRestjson1_serializeOpStartSearchJob) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpStartSearchJob) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*StartSearchJobInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-jobs")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "PUT"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentStartSearchJobInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsStartSearchJobInput(v *StartSearchJobInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentStartSearchJobInput(v *StartSearchJobInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.ClientToken != nil {
+ ok := object.Key("ClientToken")
+ ok.String(*v.ClientToken)
+ }
+
+ if v.EncryptionKeyArn != nil {
+ ok := object.Key("EncryptionKeyArn")
+ ok.String(*v.EncryptionKeyArn)
+ }
+
+ if v.ItemFilters != nil {
+ ok := object.Key("ItemFilters")
+ if err := awsRestjson1_serializeDocumentItemFilters(v.ItemFilters, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Name != nil {
+ ok := object.Key("Name")
+ ok.String(*v.Name)
+ }
+
+ if v.SearchScope != nil {
+ ok := object.Key("SearchScope")
+ if err := awsRestjson1_serializeDocumentSearchScope(v.SearchScope, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Tags != nil {
+ ok := object.Key("Tags")
+ if err := awsRestjson1_serializeDocumentTagMap(v.Tags, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpStartSearchResultExportJob struct {
+}
+
+func (*awsRestjson1_serializeOpStartSearchResultExportJob) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpStartSearchResultExportJob) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*StartSearchResultExportJobInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/export-search-jobs")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "PUT"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentStartSearchResultExportJobInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsStartSearchResultExportJobInput(v *StartSearchResultExportJobInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentStartSearchResultExportJobInput(v *StartSearchResultExportJobInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.ClientToken != nil {
+ ok := object.Key("ClientToken")
+ ok.String(*v.ClientToken)
+ }
+
+ if v.ExportSpecification != nil {
+ ok := object.Key("ExportSpecification")
+ if err := awsRestjson1_serializeDocumentExportSpecification(v.ExportSpecification, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.RoleArn != nil {
+ ok := object.Key("RoleArn")
+ ok.String(*v.RoleArn)
+ }
+
+ if v.SearchJobIdentifier != nil {
+ ok := object.Key("SearchJobIdentifier")
+ ok.String(*v.SearchJobIdentifier)
+ }
+
+ if v.Tags != nil {
+ ok := object.Key("Tags")
+ if err := awsRestjson1_serializeDocumentTagMap(v.Tags, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpStopSearchJob struct {
+}
+
+func (*awsRestjson1_serializeOpStopSearchJob) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpStopSearchJob) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*StopSearchJobInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-jobs/{SearchJobIdentifier}/actions/cancel")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "PUT"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsStopSearchJobInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsStopSearchJobInput(v *StopSearchJobInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.SearchJobIdentifier == nil || len(*v.SearchJobIdentifier) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member SearchJobIdentifier must not be empty")}
+ }
+ if v.SearchJobIdentifier != nil {
+ if err := encoder.SetURI("SearchJobIdentifier").String(*v.SearchJobIdentifier); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpTagResource struct {
+}
+
+func (*awsRestjson1_serializeOpTagResource) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpTagResource) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*TagResourceInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/tags/{ResourceArn}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsTagResourceInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentTagResourceInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsTagResourceInput(v *TagResourceInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ResourceArn == nil || len(*v.ResourceArn) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member ResourceArn must not be empty")}
+ }
+ if v.ResourceArn != nil {
+ if err := encoder.SetURI("ResourceArn").String(*v.ResourceArn); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentTagResourceInput(v *TagResourceInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Tags != nil {
+ ok := object.Key("Tags")
+ if err := awsRestjson1_serializeDocumentTagMap(v.Tags, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpUntagResource struct {
+}
+
+func (*awsRestjson1_serializeOpUntagResource) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpUntagResource) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UntagResourceInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/tags/{ResourceArn}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "DELETE"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsUntagResourceInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsUntagResourceInput(v *UntagResourceInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ResourceArn == nil || len(*v.ResourceArn) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member ResourceArn must not be empty")}
+ }
+ if v.ResourceArn != nil {
+ if err := encoder.SetURI("ResourceArn").String(*v.ResourceArn); err != nil {
+ return err
+ }
+ }
+
+ if v.TagKeys != nil {
+ for i := range v.TagKeys {
+ encoder.AddQuery("tagKeys").String(v.TagKeys[i])
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentBackupCreationTimeFilter(v *types.BackupCreationTimeFilter, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.CreatedAfter != nil {
+ ok := object.Key("CreatedAfter")
+ ok.Double(smithytime.FormatEpochSeconds(*v.CreatedAfter))
+ }
+
+ if v.CreatedBefore != nil {
+ ok := object.Key("CreatedBefore")
+ ok.Double(smithytime.FormatEpochSeconds(*v.CreatedBefore))
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentEBSItemFilter(v *types.EBSItemFilter, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.CreationTimes != nil {
+ ok := object.Key("CreationTimes")
+ if err := awsRestjson1_serializeDocumentTimeConditionList(v.CreationTimes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.FilePaths != nil {
+ ok := object.Key("FilePaths")
+ if err := awsRestjson1_serializeDocumentStringConditionList(v.FilePaths, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.LastModificationTimes != nil {
+ ok := object.Key("LastModificationTimes")
+ if err := awsRestjson1_serializeDocumentTimeConditionList(v.LastModificationTimes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Sizes != nil {
+ ok := object.Key("Sizes")
+ if err := awsRestjson1_serializeDocumentLongConditionList(v.Sizes, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentEBSItemFilters(v []types.EBSItemFilter, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentEBSItemFilter(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentExportSpecification(v types.ExportSpecification, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ switch uv := v.(type) {
+ case *types.ExportSpecificationMemberS3ExportSpecification:
+ av := object.Key("s3ExportSpecification")
+ if err := awsRestjson1_serializeDocumentS3ExportSpecification(&uv.Value, av); err != nil {
+ return err
+ }
+
+ default:
+ return fmt.Errorf("attempted to serialize unknown member type %T for union %T", uv, v)
+
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentItemFilters(v *types.ItemFilters, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.EBSItemFilters != nil {
+ ok := object.Key("EBSItemFilters")
+ if err := awsRestjson1_serializeDocumentEBSItemFilters(v.EBSItemFilters, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.S3ItemFilters != nil {
+ ok := object.Key("S3ItemFilters")
+ if err := awsRestjson1_serializeDocumentS3ItemFilters(v.S3ItemFilters, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentLongCondition(v *types.LongCondition, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if len(v.Operator) > 0 {
+ ok := object.Key("Operator")
+ ok.String(string(v.Operator))
+ }
+
+ if v.Value != nil {
+ ok := object.Key("Value")
+ ok.Long(*v.Value)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentLongConditionList(v []types.LongCondition, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentLongCondition(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentRecoveryPointArnList(v []string, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ av.String(v[i])
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentResourceArnList(v []string, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ av.String(v[i])
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentResourceTypeList(v []types.ResourceType, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ av.String(string(v[i]))
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentS3ExportSpecification(v *types.S3ExportSpecification, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.DestinationBucket != nil {
+ ok := object.Key("DestinationBucket")
+ ok.String(*v.DestinationBucket)
+ }
+
+ if v.DestinationPrefix != nil {
+ ok := object.Key("DestinationPrefix")
+ ok.String(*v.DestinationPrefix)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentS3ItemFilter(v *types.S3ItemFilter, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.CreationTimes != nil {
+ ok := object.Key("CreationTimes")
+ if err := awsRestjson1_serializeDocumentTimeConditionList(v.CreationTimes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.ETags != nil {
+ ok := object.Key("ETags")
+ if err := awsRestjson1_serializeDocumentStringConditionList(v.ETags, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.ObjectKeys != nil {
+ ok := object.Key("ObjectKeys")
+ if err := awsRestjson1_serializeDocumentStringConditionList(v.ObjectKeys, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Sizes != nil {
+ ok := object.Key("Sizes")
+ if err := awsRestjson1_serializeDocumentLongConditionList(v.Sizes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.VersionIds != nil {
+ ok := object.Key("VersionIds")
+ if err := awsRestjson1_serializeDocumentStringConditionList(v.VersionIds, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentS3ItemFilters(v []types.S3ItemFilter, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentS3ItemFilter(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentSearchScope(v *types.SearchScope, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.BackupResourceArns != nil {
+ ok := object.Key("BackupResourceArns")
+ if err := awsRestjson1_serializeDocumentRecoveryPointArnList(v.BackupResourceArns, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.BackupResourceCreationTime != nil {
+ ok := object.Key("BackupResourceCreationTime")
+ if err := awsRestjson1_serializeDocumentBackupCreationTimeFilter(v.BackupResourceCreationTime, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.BackupResourceTags != nil {
+ ok := object.Key("BackupResourceTags")
+ if err := awsRestjson1_serializeDocumentTagMap(v.BackupResourceTags, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.BackupResourceTypes != nil {
+ ok := object.Key("BackupResourceTypes")
+ if err := awsRestjson1_serializeDocumentResourceTypeList(v.BackupResourceTypes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.SourceResourceArns != nil {
+ ok := object.Key("SourceResourceArns")
+ if err := awsRestjson1_serializeDocumentResourceArnList(v.SourceResourceArns, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentStringCondition(v *types.StringCondition, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if len(v.Operator) > 0 {
+ ok := object.Key("Operator")
+ ok.String(string(v.Operator))
+ }
+
+ if v.Value != nil {
+ ok := object.Key("Value")
+ ok.String(*v.Value)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentStringConditionList(v []types.StringCondition, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentStringCondition(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentTagMap(v map[string]*string, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ for key := range v {
+ om := object.Key(key)
+ if vv := v[key]; vv == nil {
+ om.Null()
+ continue
+ }
+ om.String(*v[key])
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentTimeCondition(v *types.TimeCondition, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if len(v.Operator) > 0 {
+ ok := object.Key("Operator")
+ ok.String(string(v.Operator))
+ }
+
+ if v.Value != nil {
+ ok := object.Key("Value")
+ ok.Double(smithytime.FormatEpochSeconds(*v.Value))
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentTimeConditionList(v []types.TimeCondition, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentTimeCondition(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/service/backupsearch/snapshot/api_op_GetSearchJob.go.snap b/service/backupsearch/snapshot/api_op_GetSearchJob.go.snap
new file mode 100644
index 00000000000..9f3b2b9983d
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_GetSearchJob.go.snap
@@ -0,0 +1,41 @@
+GetSearchJob
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_GetSearchResultExportJob.go.snap b/service/backupsearch/snapshot/api_op_GetSearchResultExportJob.go.snap
new file mode 100644
index 00000000000..8b69eec7611
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_GetSearchResultExportJob.go.snap
@@ -0,0 +1,41 @@
+GetSearchResultExportJob
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_ListSearchJobBackups.go.snap b/service/backupsearch/snapshot/api_op_ListSearchJobBackups.go.snap
new file mode 100644
index 00000000000..29d7cf6bb69
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_ListSearchJobBackups.go.snap
@@ -0,0 +1,41 @@
+ListSearchJobBackups
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_ListSearchJobResults.go.snap b/service/backupsearch/snapshot/api_op_ListSearchJobResults.go.snap
new file mode 100644
index 00000000000..0a4b88687bd
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_ListSearchJobResults.go.snap
@@ -0,0 +1,41 @@
+ListSearchJobResults
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_ListSearchJobs.go.snap b/service/backupsearch/snapshot/api_op_ListSearchJobs.go.snap
new file mode 100644
index 00000000000..521803dba06
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_ListSearchJobs.go.snap
@@ -0,0 +1,40 @@
+ListSearchJobs
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_ListSearchResultExportJobs.go.snap b/service/backupsearch/snapshot/api_op_ListSearchResultExportJobs.go.snap
new file mode 100644
index 00000000000..bad10910883
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_ListSearchResultExportJobs.go.snap
@@ -0,0 +1,40 @@
+ListSearchResultExportJobs
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_ListTagsForResource.go.snap b/service/backupsearch/snapshot/api_op_ListTagsForResource.go.snap
new file mode 100644
index 00000000000..071d3ac4e96
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_ListTagsForResource.go.snap
@@ -0,0 +1,41 @@
+ListTagsForResource
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_StartSearchJob.go.snap b/service/backupsearch/snapshot/api_op_StartSearchJob.go.snap
new file mode 100644
index 00000000000..6304fe92c11
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_StartSearchJob.go.snap
@@ -0,0 +1,41 @@
+StartSearchJob
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_StartSearchResultExportJob.go.snap b/service/backupsearch/snapshot/api_op_StartSearchResultExportJob.go.snap
new file mode 100644
index 00000000000..8d3cf91ae56
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_StartSearchResultExportJob.go.snap
@@ -0,0 +1,41 @@
+StartSearchResultExportJob
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_StopSearchJob.go.snap b/service/backupsearch/snapshot/api_op_StopSearchJob.go.snap
new file mode 100644
index 00000000000..f33b184b677
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_StopSearchJob.go.snap
@@ -0,0 +1,41 @@
+StopSearchJob
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_TagResource.go.snap b/service/backupsearch/snapshot/api_op_TagResource.go.snap
new file mode 100644
index 00000000000..ae6f8e0846c
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_TagResource.go.snap
@@ -0,0 +1,41 @@
+TagResource
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot/api_op_UntagResource.go.snap b/service/backupsearch/snapshot/api_op_UntagResource.go.snap
new file mode 100644
index 00000000000..c7bbe038d98
--- /dev/null
+++ b/service/backupsearch/snapshot/api_op_UntagResource.go.snap
@@ -0,0 +1,41 @@
+UntagResource
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/backupsearch/snapshot_test.go b/service/backupsearch/snapshot_test.go
new file mode 100644
index 00000000000..8b10b41e955
--- /dev/null
+++ b/service/backupsearch/snapshot_test.go
@@ -0,0 +1,350 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+//go:build snapshot
+
+package backupsearch
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "github.com/aws/smithy-go/middleware"
+ "io"
+ "io/fs"
+ "os"
+ "testing"
+)
+
+const ssprefix = "snapshot"
+
+type snapshotOK struct{}
+
+func (snapshotOK) Error() string { return "error: success" }
+
+func createp(path string) (*os.File, error) {
+ if err := os.Mkdir(ssprefix, 0700); err != nil && !errors.Is(err, fs.ErrExist) {
+ return nil, err
+ }
+ return os.Create(path)
+}
+
+func sspath(op string) string {
+ return fmt.Sprintf("%s/api_op_%s.go.snap", ssprefix, op)
+}
+
+func updateSnapshot(stack *middleware.Stack, operation string) error {
+ f, err := createp(sspath(operation))
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ if _, err := f.Write([]byte(stack.String())); err != nil {
+ return err
+ }
+ return snapshotOK{}
+}
+
+func testSnapshot(stack *middleware.Stack, operation string) error {
+ f, err := os.Open(sspath(operation))
+ if errors.Is(err, fs.ErrNotExist) {
+ return snapshotOK{}
+ }
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ expected, err := io.ReadAll(f)
+ if err != nil {
+ return err
+ }
+ if actual := stack.String(); actual != string(expected) {
+ return fmt.Errorf("%s != %s", expected, actual)
+ }
+ return snapshotOK{}
+}
+func TestCheckSnapshot_GetSearchJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetSearchJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetSearchJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_GetSearchResultExportJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetSearchResultExportJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetSearchResultExportJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_ListSearchJobBackups(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchJobBackups(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListSearchJobBackups")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_ListSearchJobResults(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchJobResults(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListSearchJobResults")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_ListSearchJobs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchJobs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListSearchJobs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_ListSearchResultExportJobs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchResultExportJobs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListSearchResultExportJobs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_ListTagsForResource(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListTagsForResource(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListTagsForResource")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_StartSearchJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.StartSearchJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "StartSearchJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_StartSearchResultExportJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.StartSearchResultExportJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "StartSearchResultExportJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_StopSearchJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.StopSearchJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "StopSearchJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_TagResource(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.TagResource(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "TagResource")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_UntagResource(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UntagResource(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UntagResource")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+func TestUpdateSnapshot_GetSearchJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetSearchJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetSearchJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_GetSearchResultExportJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetSearchResultExportJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetSearchResultExportJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_ListSearchJobBackups(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchJobBackups(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListSearchJobBackups")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_ListSearchJobResults(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchJobResults(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListSearchJobResults")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_ListSearchJobs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchJobs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListSearchJobs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_ListSearchResultExportJobs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListSearchResultExportJobs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListSearchResultExportJobs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_ListTagsForResource(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListTagsForResource(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListTagsForResource")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_StartSearchJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.StartSearchJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "StartSearchJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_StartSearchResultExportJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.StartSearchResultExportJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "StartSearchResultExportJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_StopSearchJob(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.StopSearchJob(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "StopSearchJob")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_TagResource(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.TagResource(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "TagResource")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_UntagResource(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UntagResource(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UntagResource")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
diff --git a/service/backupsearch/types/enums.go b/service/backupsearch/types/enums.go
new file mode 100644
index 00000000000..f51c64ea189
--- /dev/null
+++ b/service/backupsearch/types/enums.go
@@ -0,0 +1,145 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+type ExportJobStatus string
+
+// Enum values for ExportJobStatus
+const (
+ ExportJobStatusRunning ExportJobStatus = "RUNNING"
+ ExportJobStatusFailed ExportJobStatus = "FAILED"
+ ExportJobStatusCompleted ExportJobStatus = "COMPLETED"
+)
+
+// Values returns all known values for ExportJobStatus. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (ExportJobStatus) Values() []ExportJobStatus {
+ return []ExportJobStatus{
+ "RUNNING",
+ "FAILED",
+ "COMPLETED",
+ }
+}
+
+type LongConditionOperator string
+
+// Enum values for LongConditionOperator
+const (
+ LongConditionOperatorEqualsTo LongConditionOperator = "EQUALS_TO"
+ LongConditionOperatorNotEqualsTo LongConditionOperator = "NOT_EQUALS_TO"
+ LongConditionOperatorLessThanEqualTo LongConditionOperator = "LESS_THAN_EQUAL_TO"
+ LongConditionOperatorGreaterThanEqualTo LongConditionOperator = "GREATER_THAN_EQUAL_TO"
+)
+
+// Values returns all known values for LongConditionOperator. Note that this can
+// be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (LongConditionOperator) Values() []LongConditionOperator {
+ return []LongConditionOperator{
+ "EQUALS_TO",
+ "NOT_EQUALS_TO",
+ "LESS_THAN_EQUAL_TO",
+ "GREATER_THAN_EQUAL_TO",
+ }
+}
+
+type ResourceType string
+
+// Enum values for ResourceType
+const (
+ ResourceTypeS3 ResourceType = "S3"
+ ResourceTypeEbs ResourceType = "EBS"
+)
+
+// Values returns all known values for ResourceType. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (ResourceType) Values() []ResourceType {
+ return []ResourceType{
+ "S3",
+ "EBS",
+ }
+}
+
+type SearchJobState string
+
+// Enum values for SearchJobState
+const (
+ SearchJobStateRunning SearchJobState = "RUNNING"
+ SearchJobStateCompleted SearchJobState = "COMPLETED"
+ SearchJobStateStopping SearchJobState = "STOPPING"
+ SearchJobStateStopped SearchJobState = "STOPPED"
+ SearchJobStateFailed SearchJobState = "FAILED"
+)
+
+// Values returns all known values for SearchJobState. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (SearchJobState) Values() []SearchJobState {
+ return []SearchJobState{
+ "RUNNING",
+ "COMPLETED",
+ "STOPPING",
+ "STOPPED",
+ "FAILED",
+ }
+}
+
+type StringConditionOperator string
+
+// Enum values for StringConditionOperator
+const (
+ StringConditionOperatorEqualsTo StringConditionOperator = "EQUALS_TO"
+ StringConditionOperatorNotEqualsTo StringConditionOperator = "NOT_EQUALS_TO"
+ StringConditionOperatorContains StringConditionOperator = "CONTAINS"
+ StringConditionOperatorDoesNotContain StringConditionOperator = "DOES_NOT_CONTAIN"
+ StringConditionOperatorBeginsWith StringConditionOperator = "BEGINS_WITH"
+ StringConditionOperatorEndsWith StringConditionOperator = "ENDS_WITH"
+ StringConditionOperatorDoesNotBeginWith StringConditionOperator = "DOES_NOT_BEGIN_WITH"
+ StringConditionOperatorDoesNotEndWith StringConditionOperator = "DOES_NOT_END_WITH"
+)
+
+// Values returns all known values for StringConditionOperator. Note that this can
+// be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (StringConditionOperator) Values() []StringConditionOperator {
+ return []StringConditionOperator{
+ "EQUALS_TO",
+ "NOT_EQUALS_TO",
+ "CONTAINS",
+ "DOES_NOT_CONTAIN",
+ "BEGINS_WITH",
+ "ENDS_WITH",
+ "DOES_NOT_BEGIN_WITH",
+ "DOES_NOT_END_WITH",
+ }
+}
+
+type TimeConditionOperator string
+
+// Enum values for TimeConditionOperator
+const (
+ TimeConditionOperatorEqualsTo TimeConditionOperator = "EQUALS_TO"
+ TimeConditionOperatorNotEqualsTo TimeConditionOperator = "NOT_EQUALS_TO"
+ TimeConditionOperatorLessThanEqualTo TimeConditionOperator = "LESS_THAN_EQUAL_TO"
+ TimeConditionOperatorGreaterThanEqualTo TimeConditionOperator = "GREATER_THAN_EQUAL_TO"
+)
+
+// Values returns all known values for TimeConditionOperator. Note that this can
+// be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (TimeConditionOperator) Values() []TimeConditionOperator {
+ return []TimeConditionOperator{
+ "EQUALS_TO",
+ "NOT_EQUALS_TO",
+ "LESS_THAN_EQUAL_TO",
+ "GREATER_THAN_EQUAL_TO",
+ }
+}
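The generated enums above all follow the same smithy-go pattern: a string-backed type, typed constants, and a `Values()` method listing the values known at generation time. A minimal stdlib-only sketch of how calling code might check a wire value against that known set (the `isKnown` helper is illustrative, not part of the SDK):

```go
package main

import "fmt"

// ExportJobStatus mirrors the generated string-backed enum pattern.
type ExportJobStatus string

const (
	ExportJobStatusRunning   ExportJobStatus = "RUNNING"
	ExportJobStatusFailed    ExportJobStatus = "FAILED"
	ExportJobStatusCompleted ExportJobStatus = "COMPLETED"
)

// Values returns all known values, exactly as the generated code does.
func (ExportJobStatus) Values() []ExportJobStatus {
	return []ExportJobStatus{"RUNNING", "FAILED", "COMPLETED"}
}

// isKnown reports whether s is a value this client was generated with.
// New service-side values report false until the SDK is regenerated.
func isKnown(s ExportJobStatus) bool {
	for _, v := range s.Values() {
		if v == s {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnown(ExportJobStatusCompleted)) // true
	fmt.Println(isKnown("ARCHIVED"))               // false
}
```

As the doc comments note, the service can add enum values at any time, so treating an unknown value as an error (rather than passing it through) is usually the wrong choice.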
diff --git a/service/backupsearch/types/errors.go b/service/backupsearch/types/errors.go
new file mode 100644
index 00000000000..553dc187ed3
--- /dev/null
+++ b/service/backupsearch/types/errors.go
@@ -0,0 +1,215 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+ "fmt"
+ smithy "github.com/aws/smithy-go"
+)
+
+// You do not have sufficient access to perform this action.
+type AccessDeniedException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *AccessDeniedException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *AccessDeniedException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *AccessDeniedException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "AccessDeniedException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *AccessDeniedException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// This exception occurs when a conflict with a previous successful operation is
+// detected. This generally occurs when the previous operation did not have time to
+// propagate to the host serving the current request.
+//
+// A retry (with appropriate backoff logic) is the recommended response to this
+// exception.
+type ConflictException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ ResourceId *string
+ ResourceType *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *ConflictException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ConflictException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *ConflictException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "ConflictException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *ConflictException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// An internal server error occurred. Retry your request.
+type InternalServerException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ RetryAfterSeconds *int32
+
+ noSmithyDocumentSerde
+}
+
+func (e *InternalServerException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InternalServerException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *InternalServerException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "InternalServerException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *InternalServerException) ErrorFault() smithy.ErrorFault { return smithy.FaultServer }
+
+// The resource was not found for this request.
+//
+// Confirm the resource information, such as the ARN or type, is correct and
+// exists, then retry the request.
+type ResourceNotFoundException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ ResourceId *string
+ ResourceType *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *ResourceNotFoundException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ResourceNotFoundException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *ResourceNotFoundException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "ResourceNotFoundException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The request was denied due to exceeding the quota limits permitted.
+type ServiceQuotaExceededException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ ResourceId *string
+ ResourceType *string
+ ServiceCode *string
+ QuotaCode *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *ServiceQuotaExceededException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ServiceQuotaExceededException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *ServiceQuotaExceededException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "ServiceQuotaExceededException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *ServiceQuotaExceededException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The request was denied due to request throttling.
+type ThrottlingException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ ServiceCode *string
+ QuotaCode *string
+ RetryAfterSeconds *int32
+
+ noSmithyDocumentSerde
+}
+
+func (e *ThrottlingException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ThrottlingException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *ThrottlingException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "ThrottlingException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *ThrottlingException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The input fails to satisfy the constraints specified by a service.
+type ValidationException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *ValidationException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ValidationException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *ValidationException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "ValidationException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *ValidationException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
diff --git a/service/backupsearch/types/types.go b/service/backupsearch/types/types.go
new file mode 100644
index 00000000000..957b7daac26
--- /dev/null
+++ b/service/backupsearch/types/types.go
@@ -0,0 +1,505 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+ smithydocument "github.com/aws/smithy-go/document"
+ "time"
+)
+
+// This filters by recovery points within the CreatedAfter and CreatedBefore
+// timestamps.
+type BackupCreationTimeFilter struct {
+
+ // This timestamp includes recovery points only created after the specified time.
+ CreatedAfter *time.Time
+
+ // This timestamp includes recovery points only created before the specified time.
+ CreatedBefore *time.Time
+
+ noSmithyDocumentSerde
+}
+
+// This contains the results retrieved from a search job that may not have
+// completed.
+type CurrentSearchProgress struct {
+
+ // This number is the sum of all items that match the item filters in a search job
+ // in progress.
+ ItemsMatchedCount *int64
+
+ // This number is the sum of all items that have been scanned so far during a
+ // search job.
+ ItemsScannedCount *int64
+
+ // This number is the sum of all backups that have been scanned so far during a
+ // search job.
+ RecoveryPointsScannedCount *int32
+
+ noSmithyDocumentSerde
+}
+
+// This contains arrays of objects, which may include CreationTimes time
+// condition objects, FilePaths string objects, LastModificationTimes time
+// condition objects, and Sizes long condition objects.
+type EBSItemFilter struct {
+
+ // You can include 1 to 10 values.
+ //
+ // If one is included, the results will return only items that match.
+ //
+ // If more than one is included, the results will return all items that match any
+ // of the included values.
+ CreationTimes []TimeCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one file path is included, the results will return only items that match the
+ // file path.
+ //
+ // If more than one file path is included, the results will return all items that
+ // match any of the file paths.
+ FilePaths []StringCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one is included, the results will return only items that match.
+ //
+ // If more than one is included, the results will return all items that match any
+ // of the included values.
+ LastModificationTimes []TimeCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one is included, the results will return only items that match.
+ //
+ // If more than one is included, the results will return all items that match any
+ // of the included values.
+ Sizes []LongCondition
+
+ noSmithyDocumentSerde
+}
+
+// These are the items returned in the results of a search of Amazon EBS backup
+// metadata.
+type EBSResultItem struct {
+
+ // These are one or more items in the results that match values for the Amazon
+ // Resource Name (ARN) of recovery points returned in a search of Amazon EBS backup
+ // metadata.
+ BackupResourceArn *string
+
+ // The name of the backup vault.
+ BackupVaultName *string
+
+ // These are one or more items in the results that match values for creation times
+ // returned in a search of Amazon EBS backup metadata.
+ CreationTime *time.Time
+
+ // These are one or more items in the results that match values for file paths
+ // returned in a search of Amazon EBS backup metadata.
+ FilePath *string
+
+ // These are one or more items in the results that match values for file sizes
+ // returned in a search of Amazon EBS backup metadata.
+ FileSize *int64
+
+ // These are one or more items in the results that match values for file systems
+ // returned in a search of Amazon EBS backup metadata.
+ FileSystemIdentifier *string
+
+ // These are one or more items in the results that match values for Last Modified
+ // Time returned in a search of Amazon EBS backup metadata.
+ LastModifiedTime *time.Time
+
+ // These are one or more items in the results that match values for the Amazon
+ // Resource Name (ARN) of source resources returned in a search of Amazon EBS
+ // backup metadata.
+ SourceResourceArn *string
+
+ noSmithyDocumentSerde
+}
+
+// This is the summary of an export job.
+type ExportJobSummary struct {
+
+ // This is the unique string that identifies a specific export job.
+ //
+ // This member is required.
+ ExportJobIdentifier *string
+
+ // This is a timestamp of the time the export job completed.
+ CompletionTime *time.Time
+
+ // This is a timestamp of the time the export job was created.
+ CreationTime *time.Time
+
+ // This is the unique ARN (Amazon Resource Name) that belongs to the new export
+ // job.
+ ExportJobArn *string
+
+ // The unique string that identifies the Amazon Resource Name (ARN) of the
+ // specified search job.
+ SearchJobArn *string
+
+ // The status of the export job is one of the following:
+ //
+ // CREATED ; RUNNING ; FAILED ; or COMPLETED .
+ Status ExportJobStatus
+
+ // A status message is a string that is returned for an export job.
+ //
+ // A status message is included for any status other than COMPLETED without issues.
+ StatusMessage *string
+
+ noSmithyDocumentSerde
+}
+
+// This contains the export specification object.
+//
+// The following types satisfy this interface:
+//
+// ExportSpecificationMemberS3ExportSpecification
+type ExportSpecification interface {
+ isExportSpecification()
+}
+
+// This specifies the destination Amazon S3 bucket for the export job. And, if
+// included, it also specifies the destination prefix.
+type ExportSpecificationMemberS3ExportSpecification struct {
+ Value S3ExportSpecification
+
+ noSmithyDocumentSerde
+}
+
+func (*ExportSpecificationMemberS3ExportSpecification) isExportSpecification() {}
+
+// Item Filters represent all input item properties specified when the search was
+// created.
+//
+// Contains either EBSItemFilters or S3ItemFilters
+type ItemFilters struct {
+
+ // This array can contain CreationTimes, FilePaths, LastModificationTimes, or
+ // Sizes objects.
+ EBSItemFilters []EBSItemFilter
+
+ // This array can contain CreationTimes, ETags, ObjectKeys, Sizes, or VersionIds
+ // objects.
+ S3ItemFilters []S3ItemFilter
+
+ noSmithyDocumentSerde
+}
+
+// The long condition contains a Value and can optionally contain an Operator .
+type LongCondition struct {
+
+ // The value of an item included in one of the search item filters.
+ //
+ // This member is required.
+ Value *int64
+
+ // A string that defines what values will be returned.
+ //
+ // If this is included, avoid combinations of operators that will return all
+ // possible values. For example, including both EQUALS_TO and NOT_EQUALS_TO with a
+ // value of 4 will return all values.
+ Operator LongConditionOperator
+
+ noSmithyDocumentSerde
+}
+
+// This is an object representing the item returned in the results of a search for
+// a specific resource type.
+//
+// The following types satisfy this interface:
+//
+// ResultItemMemberEBSResultItem
+// ResultItemMemberS3ResultItem
+type ResultItem interface {
+ isResultItem()
+}
+
+// These are items returned in the search results of an Amazon EBS search.
+type ResultItemMemberEBSResultItem struct {
+ Value EBSResultItem
+
+ noSmithyDocumentSerde
+}
+
+func (*ResultItemMemberEBSResultItem) isResultItem() {}
+
+// These are items returned in the search results of an Amazon S3 search.
+type ResultItemMemberS3ResultItem struct {
+ Value S3ResultItem
+
+ noSmithyDocumentSerde
+}
+
+func (*ResultItemMemberS3ResultItem) isResultItem() {}
+
+// This specification contains a required string of the destination bucket;
+// optionally, you can include the destination prefix.
+type S3ExportSpecification struct {
+
+ // This specifies the destination Amazon S3 bucket for the export job.
+ //
+ // This member is required.
+ DestinationBucket *string
+
+ // This specifies the prefix for the destination Amazon S3 bucket for the export
+ // job.
+ DestinationPrefix *string
+
+ noSmithyDocumentSerde
+}
+
+// This contains arrays of objects, which may include ObjectKeys, Sizes,
+// CreationTimes, VersionIds, and/or Etags.
+type S3ItemFilter struct {
+
+ // You can include 1 to 10 values.
+ //
+ // If one value is included, the results will return only items that match the
+ // value.
+ //
+ // If more than one value is included, the results will return all items that
+ // match any of the values.
+ CreationTimes []TimeCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one value is included, the results will return only items that match the
+ // value.
+ //
+ // If more than one value is included, the results will return all items that
+ // match any of the values.
+ ETags []StringCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one value is included, the results will return only items that match the
+ // value.
+ //
+ // If more than one value is included, the results will return all items that
+ // match any of the values.
+ ObjectKeys []StringCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one value is included, the results will return only items that match the
+ // value.
+ //
+ // If more than one value is included, the results will return all items that
+ // match any of the values.
+ Sizes []LongCondition
+
+ // You can include 1 to 10 values.
+ //
+ // If one value is included, the results will return only items that match the
+ // value.
+ //
+ // If more than one value is included, the results will return all items that
+ // match any of the values.
+ VersionIds []StringCondition
+
+ noSmithyDocumentSerde
+}
+
+// These are the items returned in the results of a search of Amazon S3 backup
+// metadata.
+type S3ResultItem struct {
+
+ // These are items in the returned results that match recovery point Amazon
+ // Resource Names (ARN) input during a search of Amazon S3 backup metadata.
+ BackupResourceArn *string
+
+ // The name of the backup vault.
+ BackupVaultName *string
+
+ // These are one or more items in the returned results that match values for item
+ // creation time input during a search of Amazon S3 backup metadata.
+ CreationTime *time.Time
+
+ // These are one or more items in the returned results that match values for ETags
+ // input during a search of Amazon S3 backup metadata.
+ ETag *string
+
+ // This is one or more items returned in the results of a search of Amazon S3
+ // backup metadata that match the values input for object key.
+ ObjectKey *string
+
+ // These are items in the returned results that match values for object size(s)
+ // input during a search of Amazon S3 backup metadata.
+ ObjectSize *int64
+
+ // These are items in the returned results that match source Amazon Resource Names
+ // (ARN) input during a search of Amazon S3 backup metadata.
+ SourceResourceArn *string
+
+ // These are one or more items in the returned results that match values for
+ // version IDs input during a search of Amazon S3 backup metadata.
+ VersionId *string
+
+ noSmithyDocumentSerde
+}
+
+// This contains the information about recovery points returned in results of a
+// search job.
+type SearchJobBackupsResult struct {
+
+ // This is the creation time of the backup (recovery point).
+ BackupCreationTime *time.Time
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the backup resources.
+ BackupResourceArn *string
+
+ // This is the creation time of the backup index.
+ IndexCreationTime *time.Time
+
+ // This is the resource type of the search.
+ ResourceType ResourceType
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the source resources.
+ SourceResourceArn *string
+
+ // This is the status of the search job backup result.
+ Status SearchJobState
+
+ // This is the status message included with the results.
+ StatusMessage *string
+
+ noSmithyDocumentSerde
+}
+
+// This is information pertaining to a search job.
+type SearchJobSummary struct {
+
+ // This is the completion time of the search job.
+ CompletionTime *time.Time
+
+ // This is the creation time of the search job.
+ CreationTime *time.Time
+
+ // This is the name of the search job.
+ Name *string
+
+ // The unique string that identifies the Amazon Resource Name (ARN) of the
+ // specified search job.
+ SearchJobArn *string
+
+ // The unique string that specifies the search job.
+ SearchJobIdentifier *string
+
+ // Returned summary of the specified search job scope, including:
+ //
+ // - TotalBackupsToScanCount, the number of recovery points returned by the
+ // search.
+ //
+ // - TotalItemsToScanCount, the number of items returned by the search.
+ SearchScopeSummary *SearchScopeSummary
+
+ // This is the status of the search job.
+ Status SearchJobState
+
+ // A status message will be returned for either a search job with a status of
+ // ERRORED or a status of COMPLETED jobs with issues.
+ //
+ // For example, a message may say that a search contained recovery points unable
+ // to be scanned because of a permissions issue.
+ StatusMessage *string
+
+ noSmithyDocumentSerde
+}
+
+// The search scope is all backup properties input into a search.
+type SearchScope struct {
+
+ // The resource types included in a search.
+ //
+ // Eligible resource types include S3 and EBS.
+ //
+ // This member is required.
+ BackupResourceTypes []ResourceType
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the backup resources.
+ BackupResourceArns []string
+
+ // This is the time a backup resource was created.
+ BackupResourceCreationTime *BackupCreationTimeFilter
+
+ // These are one or more tags on the backup (recovery point).
+ BackupResourceTags map[string]*string
+
+ // The Amazon Resource Name (ARN) that uniquely identifies the source resources.
+ SourceResourceArns []string
+
+ noSmithyDocumentSerde
+}
+
+// The summary of the specified search job scope, including:
+//
+// - TotalBackupsToScanCount, the number of recovery points returned by the
+// search.
+//
+// - TotalItemsToScanCount, the number of items returned by the search.
+type SearchScopeSummary struct {
+
+ // This is the count of the total number of items that will be scanned in a search.
+ TotalItemsToScanCount *int64
+
+ // This is the count of the total number of backups that will be scanned in a
+ // search.
+ TotalRecoveryPointsToScanCount *int32
+
+ noSmithyDocumentSerde
+}
+
+// This contains the value of the string and can contain one or more operators.
+type StringCondition struct {
+
+ // The value of the string.
+ //
+ // This member is required.
+ Value *string
+
+ // A string that defines what values will be returned.
+ //
+ // If this is included, avoid combinations of operators that will return all
+ // possible values. For example, including both EQUALS_TO and NOT_EQUALS_TO with a
+ // value of 4 will return all values.
+ Operator StringConditionOperator
+
+ noSmithyDocumentSerde
+}
+
+// A time condition denotes a creation time, last modification time, or other time.
+type TimeCondition struct {
+
+ // This is the timestamp value of the time condition.
+ //
+ // This member is required.
+ Value *time.Time
+
+ // A string that defines what values will be returned.
+ //
+ // If this is included, avoid combinations of operators that will return all
+ // possible values. For example, including both EQUALS_TO and NOT_EQUALS_TO with a
+ // value of 4 will return all values.
+ Operator TimeConditionOperator
+
+ noSmithyDocumentSerde
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde
+
+// UnknownUnionMember is returned when a union member is returned over the wire,
+// but has an unknown tag.
+type UnknownUnionMember struct {
+ Tag string
+ Value []byte
+
+ noSmithyDocumentSerde
+}
+
+func (*UnknownUnionMember) isExportSpecification() {}
+func (*UnknownUnionMember) isResultItem() {}
diff --git a/service/backupsearch/types/types_exported_test.go b/service/backupsearch/types/types_exported_test.go
new file mode 100644
index 00000000000..7137cf4ccff
--- /dev/null
+++ b/service/backupsearch/types/types_exported_test.go
@@ -0,0 +1,48 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types_test
+
+import (
+ "fmt"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+)
+
+func ExampleExportSpecification_outputUsage() {
+ var union types.ExportSpecification
+ // type switches can be used to check the union value
+ switch v := union.(type) {
+ case *types.ExportSpecificationMemberS3ExportSpecification:
+ _ = v.Value // Value is types.S3ExportSpecification
+
+ case *types.UnknownUnionMember:
+ fmt.Println("unknown tag:", v.Tag)
+
+ default:
+ fmt.Println("union is nil or unknown type")
+
+ }
+}
+
+var _ *types.S3ExportSpecification
+
+func ExampleResultItem_outputUsage() {
+ var union types.ResultItem
+ // type switches can be used to check the union value
+ switch v := union.(type) {
+ case *types.ResultItemMemberEBSResultItem:
+ _ = v.Value // Value is types.EBSResultItem
+
+ case *types.ResultItemMemberS3ResultItem:
+ _ = v.Value // Value is types.S3ResultItem
+
+ case *types.UnknownUnionMember:
+ fmt.Println("unknown tag:", v.Tag)
+
+ default:
+ fmt.Println("union is nil or unknown type")
+
+ }
+}
+
+var _ *types.S3ResultItem
+var _ *types.EBSResultItem
diff --git a/service/backupsearch/validators.go b/service/backupsearch/validators.go
new file mode 100644
index 00000000000..5ebd35916c3
--- /dev/null
+++ b/service/backupsearch/validators.go
@@ -0,0 +1,693 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package backupsearch
+
+import (
+ "context"
+ "fmt"
+ "github.com/aws/aws-sdk-go-v2/service/backupsearch/types"
+ smithy "github.com/aws/smithy-go"
+ "github.com/aws/smithy-go/middleware"
+)
+
+type validateOpGetSearchJob struct {
+}
+
+func (*validateOpGetSearchJob) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetSearchJob) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetSearchJobInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetSearchJobInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpGetSearchResultExportJob struct {
+}
+
+func (*validateOpGetSearchResultExportJob) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetSearchResultExportJob) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetSearchResultExportJobInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetSearchResultExportJobInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpListSearchJobBackups struct {
+}
+
+func (*validateOpListSearchJobBackups) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpListSearchJobBackups) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*ListSearchJobBackupsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpListSearchJobBackupsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpListSearchJobResults struct {
+}
+
+func (*validateOpListSearchJobResults) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpListSearchJobResults) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*ListSearchJobResultsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpListSearchJobResultsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpListTagsForResource struct {
+}
+
+func (*validateOpListTagsForResource) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpListTagsForResource) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*ListTagsForResourceInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpListTagsForResourceInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpStartSearchJob struct {
+}
+
+func (*validateOpStartSearchJob) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpStartSearchJob) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*StartSearchJobInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpStartSearchJobInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpStartSearchResultExportJob struct {
+}
+
+func (*validateOpStartSearchResultExportJob) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpStartSearchResultExportJob) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*StartSearchResultExportJobInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpStartSearchResultExportJobInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpStopSearchJob struct {
+}
+
+func (*validateOpStopSearchJob) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpStopSearchJob) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*StopSearchJobInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpStopSearchJobInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpTagResource struct {
+}
+
+func (*validateOpTagResource) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpTagResource) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*TagResourceInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpTagResourceInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpUntagResource struct {
+}
+
+func (*validateOpUntagResource) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUntagResource) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UntagResourceInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUntagResourceInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+func addOpGetSearchJobValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetSearchJob{}, middleware.After)
+}
+
+func addOpGetSearchResultExportJobValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetSearchResultExportJob{}, middleware.After)
+}
+
+func addOpListSearchJobBackupsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpListSearchJobBackups{}, middleware.After)
+}
+
+func addOpListSearchJobResultsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpListSearchJobResults{}, middleware.After)
+}
+
+func addOpListTagsForResourceValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpListTagsForResource{}, middleware.After)
+}
+
+func addOpStartSearchJobValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpStartSearchJob{}, middleware.After)
+}
+
+func addOpStartSearchResultExportJobValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpStartSearchResultExportJob{}, middleware.After)
+}
+
+func addOpStopSearchJobValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpStopSearchJob{}, middleware.After)
+}
+
+func addOpTagResourceValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpTagResource{}, middleware.After)
+}
+
+func addOpUntagResourceValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUntagResource{}, middleware.After)
+}
+
+func validateEBSItemFilter(v *types.EBSItemFilter) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "EBSItemFilter"}
+ if v.FilePaths != nil {
+ if err := validateStringConditionList(v.FilePaths); err != nil {
+ invalidParams.AddNested("FilePaths", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.Sizes != nil {
+ if err := validateLongConditionList(v.Sizes); err != nil {
+ invalidParams.AddNested("Sizes", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.CreationTimes != nil {
+ if err := validateTimeConditionList(v.CreationTimes); err != nil {
+ invalidParams.AddNested("CreationTimes", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.LastModificationTimes != nil {
+ if err := validateTimeConditionList(v.LastModificationTimes); err != nil {
+ invalidParams.AddNested("LastModificationTimes", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateEBSItemFilters(v []types.EBSItemFilter) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "EBSItemFilters"}
+ for i := range v {
+ if err := validateEBSItemFilter(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateExportSpecification(v types.ExportSpecification) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "ExportSpecification"}
+ switch uv := v.(type) {
+ case *types.ExportSpecificationMemberS3ExportSpecification:
+ if err := validateS3ExportSpecification(&uv.Value); err != nil {
+ invalidParams.AddNested("[s3ExportSpecification]", err.(smithy.InvalidParamsError))
+ }
+
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateItemFilters(v *types.ItemFilters) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "ItemFilters"}
+ if v.S3ItemFilters != nil {
+ if err := validateS3ItemFilters(v.S3ItemFilters); err != nil {
+ invalidParams.AddNested("S3ItemFilters", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.EBSItemFilters != nil {
+ if err := validateEBSItemFilters(v.EBSItemFilters); err != nil {
+ invalidParams.AddNested("EBSItemFilters", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateLongCondition(v *types.LongCondition) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "LongCondition"}
+ if v.Value == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Value"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateLongConditionList(v []types.LongCondition) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "LongConditionList"}
+ for i := range v {
+ if err := validateLongCondition(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateS3ExportSpecification(v *types.S3ExportSpecification) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "S3ExportSpecification"}
+ if v.DestinationBucket == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("DestinationBucket"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateS3ItemFilter(v *types.S3ItemFilter) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "S3ItemFilter"}
+ if v.ObjectKeys != nil {
+ if err := validateStringConditionList(v.ObjectKeys); err != nil {
+ invalidParams.AddNested("ObjectKeys", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.Sizes != nil {
+ if err := validateLongConditionList(v.Sizes); err != nil {
+ invalidParams.AddNested("Sizes", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.CreationTimes != nil {
+ if err := validateTimeConditionList(v.CreationTimes); err != nil {
+ invalidParams.AddNested("CreationTimes", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.VersionIds != nil {
+ if err := validateStringConditionList(v.VersionIds); err != nil {
+ invalidParams.AddNested("VersionIds", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.ETags != nil {
+ if err := validateStringConditionList(v.ETags); err != nil {
+ invalidParams.AddNested("ETags", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateS3ItemFilters(v []types.S3ItemFilter) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "S3ItemFilters"}
+ for i := range v {
+ if err := validateS3ItemFilter(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateSearchScope(v *types.SearchScope) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "SearchScope"}
+ if v.BackupResourceTypes == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("BackupResourceTypes"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateStringCondition(v *types.StringCondition) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "StringCondition"}
+ if v.Value == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Value"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateStringConditionList(v []types.StringCondition) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "StringConditionList"}
+ for i := range v {
+ if err := validateStringCondition(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateTimeCondition(v *types.TimeCondition) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "TimeCondition"}
+ if v.Value == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Value"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateTimeConditionList(v []types.TimeCondition) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "TimeConditionList"}
+ for i := range v {
+ if err := validateTimeCondition(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpGetSearchJobInput(v *GetSearchJobInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetSearchJobInput"}
+ if v.SearchJobIdentifier == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SearchJobIdentifier"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpGetSearchResultExportJobInput(v *GetSearchResultExportJobInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetSearchResultExportJobInput"}
+ if v.ExportJobIdentifier == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ExportJobIdentifier"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpListSearchJobBackupsInput(v *ListSearchJobBackupsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "ListSearchJobBackupsInput"}
+ if v.SearchJobIdentifier == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SearchJobIdentifier"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpListSearchJobResultsInput(v *ListSearchJobResultsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "ListSearchJobResultsInput"}
+ if v.SearchJobIdentifier == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SearchJobIdentifier"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpListTagsForResourceInput(v *ListTagsForResourceInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "ListTagsForResourceInput"}
+ if v.ResourceArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ResourceArn"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpStartSearchJobInput(v *StartSearchJobInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "StartSearchJobInput"}
+ if v.SearchScope == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SearchScope"))
+ } else if v.SearchScope != nil {
+ if err := validateSearchScope(v.SearchScope); err != nil {
+ invalidParams.AddNested("SearchScope", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.ItemFilters != nil {
+ if err := validateItemFilters(v.ItemFilters); err != nil {
+ invalidParams.AddNested("ItemFilters", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpStartSearchResultExportJobInput(v *StartSearchResultExportJobInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "StartSearchResultExportJobInput"}
+ if v.SearchJobIdentifier == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SearchJobIdentifier"))
+ }
+ if v.ExportSpecification == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ExportSpecification"))
+ } else if v.ExportSpecification != nil {
+ if err := validateExportSpecification(v.ExportSpecification); err != nil {
+ invalidParams.AddNested("ExportSpecification", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpStopSearchJobInput(v *StopSearchJobInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "StopSearchJobInput"}
+ if v.SearchJobIdentifier == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SearchJobIdentifier"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpTagResourceInput(v *TagResourceInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "TagResourceInput"}
+ if v.ResourceArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ResourceArn"))
+ }
+ if v.Tags == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Tags"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpUntagResourceInput(v *UntagResourceInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UntagResourceInput"}
+ if v.ResourceArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ResourceArn"))
+ }
+ if v.TagKeys == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("TagKeys"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
diff --git a/service/batch/CHANGELOG.md b/service/batch/CHANGELOG.md
index 0212069ea79..85335bfcf49 100644
--- a/service/batch/CHANGELOG.md
+++ b/service/batch/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.49.0 (2024-12-17)
+
+* **Feature**: This feature allows AWS Batch on Amazon EKS to support configuration of Pod Annotations, overriding the Namespace on which the Batch job's Pod runs, and allows Subpath and Persistent Volume claim to be set for AWS Batch on Amazon EKS jobs.
+
# v1.48.2 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/batch/deserializers.go b/service/batch/deserializers.go
index 5eee3dbd52e..0fa4061b177 100644
--- a/service/batch/deserializers.go
+++ b/service/batch/deserializers.go
@@ -5663,6 +5663,42 @@ func awsRestjson1_deserializeDocumentEFSVolumeConfiguration(v **types.EFSVolumeC
return nil
}
+func awsRestjson1_deserializeDocumentEksAnnotationsMap(v *map[string]string, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var mv map[string]string
+ if *v == nil {
+ mv = map[string]string{}
+ } else {
+ mv = *v
+ }
+
+ for key, value := range shape {
+ var parsedVal string
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ parsedVal = jtv
+ }
+ mv[key] = parsedVal
+
+ }
+ *v = mv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentEksAttemptContainerDetail(v **types.EksAttemptContainerDetail, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -6495,6 +6531,15 @@ func awsRestjson1_deserializeDocumentEksContainerVolumeMount(v **types.EksContai
sv.ReadOnly = ptr.Bool(jtv)
}
+ case "subPath":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.SubPath = ptr.String(jtv)
+ }
+
default:
_, _ = key, value
@@ -6721,11 +6766,74 @@ func awsRestjson1_deserializeDocumentEksMetadata(v **types.EksMetadata, value in
for key, value := range shape {
switch key {
+ case "annotations":
+ if err := awsRestjson1_deserializeDocumentEksAnnotationsMap(&sv.Annotations, value); err != nil {
+ return err
+ }
+
case "labels":
if err := awsRestjson1_deserializeDocumentEksLabelsMap(&sv.Labels, value); err != nil {
return err
}
+ case "namespace":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Namespace = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentEksPersistentVolumeClaim(v **types.EksPersistentVolumeClaim, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.EksPersistentVolumeClaim
+ if *v == nil {
+ sv = &types.EksPersistentVolumeClaim{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "claimName":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.ClaimName = ptr.String(jtv)
+ }
+
+ case "readOnly":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected Boolean to be of type *bool, got %T instead", value)
+ }
+ sv.ReadOnly = ptr.Bool(jtv)
+ }
+
default:
_, _ = key, value
@@ -7135,6 +7243,11 @@ func awsRestjson1_deserializeDocumentEksVolume(v **types.EksVolume, value interf
sv.Name = ptr.String(jtv)
}
+ case "persistentVolumeClaim":
+ if err := awsRestjson1_deserializeDocumentEksPersistentVolumeClaim(&sv.PersistentVolumeClaim, value); err != nil {
+ return err
+ }
+
case "secret":
if err := awsRestjson1_deserializeDocumentEksSecret(&sv.Secret, value); err != nil {
return err
diff --git a/service/batch/go_module_metadata.go b/service/batch/go_module_metadata.go
index 95068866d33..be6d54815d3 100644
--- a/service/batch/go_module_metadata.go
+++ b/service/batch/go_module_metadata.go
@@ -3,4 +3,4 @@
package batch
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.48.2"
+const goModuleVersion = "1.49.0"
diff --git a/service/batch/serializers.go b/service/batch/serializers.go
index 4a3abbf6dbe..aa96a28b63e 100644
--- a/service/batch/serializers.go
+++ b/service/batch/serializers.go
@@ -3150,6 +3150,17 @@ func awsRestjson1_serializeDocumentEFSVolumeConfiguration(v *types.EFSVolumeConf
return nil
}
+func awsRestjson1_serializeDocumentEksAnnotationsMap(v map[string]string, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ for key := range v {
+ om := object.Key(key)
+ om.String(v[key])
+ }
+ return nil
+}
+
func awsRestjson1_serializeDocumentEksConfiguration(v *types.EksConfiguration, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -3409,6 +3420,11 @@ func awsRestjson1_serializeDocumentEksContainerVolumeMount(v *types.EksContainer
ok.Boolean(*v.ReadOnly)
}
+ if v.SubPath != nil {
+ ok := object.Key("subPath")
+ ok.String(*v.SubPath)
+ }
+
return nil
}
@@ -3480,6 +3496,13 @@ func awsRestjson1_serializeDocumentEksMetadata(v *types.EksMetadata, value smith
object := value.Object()
defer object.Close()
+ if v.Annotations != nil {
+ ok := object.Key("annotations")
+ if err := awsRestjson1_serializeDocumentEksAnnotationsMap(v.Annotations, ok); err != nil {
+ return err
+ }
+ }
+
if v.Labels != nil {
ok := object.Key("labels")
if err := awsRestjson1_serializeDocumentEksLabelsMap(v.Labels, ok); err != nil {
@@ -3487,6 +3510,28 @@ func awsRestjson1_serializeDocumentEksMetadata(v *types.EksMetadata, value smith
}
}
+ if v.Namespace != nil {
+ ok := object.Key("namespace")
+ ok.String(*v.Namespace)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentEksPersistentVolumeClaim(v *types.EksPersistentVolumeClaim, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.ClaimName != nil {
+ ok := object.Key("claimName")
+ ok.String(*v.ClaimName)
+ }
+
+ if v.ReadOnly != nil {
+ ok := object.Key("readOnly")
+ ok.Boolean(*v.ReadOnly)
+ }
+
return nil
}
@@ -3659,6 +3704,13 @@ func awsRestjson1_serializeDocumentEksVolume(v *types.EksVolume, value smithyjso
ok.String(*v.Name)
}
+ if v.PersistentVolumeClaim != nil {
+ ok := object.Key("persistentVolumeClaim")
+ if err := awsRestjson1_serializeDocumentEksPersistentVolumeClaim(v.PersistentVolumeClaim, ok); err != nil {
+ return err
+ }
+ }
+
if v.Secret != nil {
ok := object.Key("secret")
if err := awsRestjson1_serializeDocumentEksSecret(v.Secret, ok); err != nil {
diff --git a/service/batch/types/types.go b/service/batch/types/types.go
index 6c494404de9..a8c565a1850 100644
--- a/service/batch/types/types.go
+++ b/service/batch/types/types.go
@@ -2148,6 +2148,9 @@ type EksContainerVolumeMount struct {
// Otherwise, the container can write to the volume. The default value is false .
ReadOnly *bool
+ // A sub-path inside the referenced volume instead of its root.
+ SubPath *string
+
noSmithyDocumentSerde
}
@@ -2199,12 +2202,80 @@ type EksHostPath struct {
// [Understanding Kubernetes Objects]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
type EksMetadata struct {
+ // Key-value pairs used to attach arbitrary, non-identifying metadata to
+ // Kubernetes objects. Valid annotation keys have two segments: an optional prefix
+ // and a name, separated by a slash (/).
+ //
+ // - The prefix is optional and must be 253 characters or less. If specified,
+ // the prefix must be a DNS subdomain: a series of DNS labels separated by dots
+ // (.), and it must end with a slash (/).
+ //
+ // - The name segment is required and must be 63 characters or less. It can
+ // include alphanumeric characters ([a-z0-9A-Z]), dashes (-), underscores (_), and
+ // dots (.), but must begin and end with an alphanumeric character.
+ //
+ // Annotation values must be 255 characters or less.
+ //
+ // Annotations can be added or modified at any time. Each resource can have
+ // multiple annotations.
+ Annotations map[string]string
+
// Key-value pairs used to identify, sort, and organize cube resources. Can
// contain up to 63 uppercase letters, lowercase letters, numbers, hyphens (-), and
// underscores (_). Labels can be added or modified at any time. Each resource can
// have multiple labels, but each key must be unique for a given object.
Labels map[string]string
+ // The namespace of the Amazon EKS cluster. In Kubernetes, namespaces provide a
+ // mechanism for isolating groups of resources within a single cluster. Names of
+ // resources need to be unique within a namespace, but not across namespaces. Batch
+ // places Batch Job pods in this namespace. If this field is provided, the value
+ // can't be empty or null. It must meet the following requirements:
+ //
+ // - 1-63 characters long
+ //
+ // - Can't be set to default
+ //
+ // - Can't start with kube
+ //
+ // - Must match the following regular expression:
+ // ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
+ //
+ // For more information, see [Namespaces] in the Kubernetes documentation. This namespace can
+ // be different from the kubernetesNamespace set in the compute environment's
+ // EksConfiguration , but must have identical role-based access control (RBAC)
+ // roles as the compute environment's kubernetesNamespace . For multi-node parallel
+ // jobs, the same value must be provided across all the node ranges.
+ //
+ // [Namespaces]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
+ Namespace *string
+
+ noSmithyDocumentSerde
+}
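+
+// Illustrative example (not generated code): populating the new Annotations and
+// Namespace fields of EksMetadata from a client's perspective. The annotation
+// key, value, and namespace shown are hypothetical.
+//
+//	metadata := types.EksMetadata{
+//		Annotations: map[string]string{
+//			"example.com/team": "data-platform",
+//		},
+//		Namespace: aws.String("batch-jobs"),
+//	}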
+
+// A persistentVolumeClaim volume is used to mount a [PersistentVolume] into a Pod.
+// PersistentVolumeClaims are a way for users to "claim" durable storage without
+// knowing the details of the particular cloud environment. See the information
+// about [PersistentVolumes] in the Kubernetes documentation.
+//
+// [PersistentVolumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
+// [PersistentVolume]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
+type EksPersistentVolumeClaim struct {
+
+ // The name of the persistentVolumeClaim bound to a persistentVolume . For more
+ // information, see [Persistent Volume Claims] in the Kubernetes documentation.
+ //
+ // [Persistent Volume Claims]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+ //
+ // This member is required.
+ ClaimName *string
+
+ // An optional boolean value indicating whether the mount is read-only. Default
+ // is false. For more information, see [Read Only Mounts] in the Kubernetes documentation.
+ //
+ // [Read Only Mounts]: https://kubernetes.io/docs/concepts/storage/volumes/#read-only-mounts
+ ReadOnly *bool
+
noSmithyDocumentSerde
}
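+
+// Illustrative example (not generated code): wiring the new persistent volume
+// claim into an EksVolume and mounting it at a sub-path from a client's
+// perspective. The claim name and paths are hypothetical.
+//
+//	volume := types.EksVolume{
+//		Name: aws.String("data"),
+//		PersistentVolumeClaim: &types.EksPersistentVolumeClaim{
+//			ClaimName: aws.String("my-claim"),
+//		},
+//	}
+//	mount := types.EksContainerVolumeMount{
+//		Name:      aws.String("data"),
+//		MountPath: aws.String("/data"),
+//		SubPath:   aws.String("jobs/output"),
+//	}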
@@ -2450,6 +2521,12 @@ type EksVolume struct {
// [hostPath]: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
HostPath *EksHostPath
+ // Specifies the configuration of a Kubernetes persistentVolumeClaim bound to a
+ // persistentVolume . For more information, see [Persistent Volume Claims] in the Kubernetes documentation.
+ //
+ // [Persistent Volume Claims]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+ PersistentVolumeClaim *EksPersistentVolumeClaim
+
// Specifies the configuration of a Kubernetes secret volume. For more
// information, see [secret]in the Kubernetes documentation.
//
diff --git a/service/batch/validators.go b/service/batch/validators.go
index 92649e24d93..eda2b29755b 100644
--- a/service/batch/validators.go
+++ b/service/batch/validators.go
@@ -902,6 +902,21 @@ func validateEksContainers(v []types.EksContainer) error {
}
}
+func validateEksPersistentVolumeClaim(v *types.EksPersistentVolumeClaim) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "EksPersistentVolumeClaim"}
+ if v.ClaimName == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ClaimName"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateEksPodProperties(v *types.EksPodProperties) error {
if v == nil {
return nil
@@ -1018,6 +1033,11 @@ func validateEksVolume(v *types.EksVolume) error {
invalidParams.AddNested("Secret", err.(smithy.InvalidParamsError))
}
}
+ if v.PersistentVolumeClaim != nil {
+ if err := validateEksPersistentVolumeClaim(v.PersistentVolumeClaim); err != nil {
+ invalidParams.AddNested("PersistentVolumeClaim", err.(smithy.InvalidParamsError))
+ }
+ }
if invalidParams.Len() > 0 {
return invalidParams
} else {
diff --git a/service/budgets/CHANGELOG.md b/service/budgets/CHANGELOG.md
index 5e5c3eb4139..57f1f102f7b 100644
--- a/service/budgets/CHANGELOG.md
+++ b/service/budgets/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.29.0 (2024-12-18)
+
+* **Feature**: Releasing minor partition endpoint updates
+
# v1.28.8 (2024-12-09)
* No change notes available for this release.
diff --git a/service/budgets/endpoints.go b/service/budgets/endpoints.go
index ad07416054a..9c54309c875 100644
--- a/service/budgets/endpoints.go
+++ b/service/budgets/endpoints.go
@@ -426,6 +426,74 @@ func (r *resolver) ResolveEndpoint(
}
}
}
+ if _PartitionResult.Name == "aws-iso" {
+ if _UseFIPS == false {
+ if _UseDualStack == false {
+ uriString := "https://budgets.c2s.ic.gov"
+
+ uri, err := url.Parse(uriString)
+ if err != nil {
+ return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+ }
+
+ return smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningName(&sp, "budgets")
+ smithyhttp.SetSigV4ASigningName(&sp, "budgets")
+
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }, nil
+ }
+ }
+ }
+ if _PartitionResult.Name == "aws-iso-b" {
+ if _UseFIPS == false {
+ if _UseDualStack == false {
+ uriString := "https://budgets.global.sc2s.sgov.gov"
+
+ uri, err := url.Parse(uriString)
+ if err != nil {
+ return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+ }
+
+ return smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningName(&sp, "budgets")
+ smithyhttp.SetSigV4ASigningName(&sp, "budgets")
+
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }, nil
+ }
+ }
+ }
if _PartitionResult.Name == "aws-iso-e" {
if _UseFIPS == false {
if _UseDualStack == false {
diff --git a/service/budgets/endpoints_test.go b/service/budgets/endpoints_test.go
index f521a1895c6..dccd34fa0b1 100644
--- a/service/budgets/endpoints_test.go
+++ b/service/budgets/endpoints_test.go
@@ -598,8 +598,61 @@ func TestEndpointCase13(t *testing.T) {
}
}
-// For region us-iso-east-1 with FIPS enabled and DualStack enabled
+// For region aws-iso-global with FIPS disabled and DualStack disabled
func TestEndpointCase14(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("aws-iso-global"),
+ UseFIPS: ptr.Bool(false),
+ UseDualStack: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://budgets.c2s.ic.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningName(&sp, "budgets")
+ smithyhttp.SetSigV4ASigningName(&sp, "budgets")
+
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
+// For region us-iso-east-1 with FIPS enabled and DualStack enabled
+func TestEndpointCase15(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(true),
@@ -619,7 +672,7 @@ func TestEndpointCase14(t *testing.T) {
}
// For region us-iso-east-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase15(t *testing.T) {
+func TestEndpointCase16(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(true),
@@ -656,7 +709,7 @@ func TestEndpointCase15(t *testing.T) {
}
// For region us-iso-east-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase16(t *testing.T) {
+func TestEndpointCase17(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(false),
@@ -676,7 +729,7 @@ func TestEndpointCase16(t *testing.T) {
}
// For region us-iso-east-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase17(t *testing.T) {
+func TestEndpointCase18(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-iso-east-1"),
UseFIPS: ptr.Bool(false),
@@ -691,12 +744,81 @@ func TestEndpointCase17(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://budgets.us-iso-east-1.c2s.ic.gov")
+ uri, _ := url.Parse("https://budgets.c2s.ic.gov")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningName(&sp, "budgets")
+ smithyhttp.SetSigV4ASigningName(&sp, "budgets")
+
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-iso-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
+ }
+
+ if e, a := expectEndpoint.URI, result.URI; e != a {
+ t.Errorf("expect %v URI, got %v", e, a)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Headers, result.Headers) {
+ t.Errorf("expect headers to match\n%v != %v", expectEndpoint.Headers, result.Headers)
+ }
+
+ if !reflect.DeepEqual(expectEndpoint.Properties, result.Properties) {
+ t.Errorf("expect properties to match\n%v != %v", expectEndpoint.Properties, result.Properties)
+ }
+}
+
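Cases 14 and 18 above both resolve to `https://budgets.c2s.ic.gov`: in the ISO partitions the budgets endpoint is partition-wide, so the global pseudo-region and the regional name map to one hostname while the SigV4 signing region stays regional. A stdlib-only sketch of that mapping, using a toy lookup table rather than the real resolver rules:

```go
package main

import (
	"fmt"
	"net/url"
)

// resolveBudgetsHost mirrors the behavior the tests above assert.
// The table is illustrative only; the real resolver derives these
// hostnames from partition metadata, not a hardcoded map.
func resolveBudgetsHost(region string) (string, error) {
	hosts := map[string]string{
		"aws-iso-global":   "https://budgets.c2s.ic.gov",
		"us-iso-east-1":    "https://budgets.c2s.ic.gov",
		"aws-iso-b-global": "https://budgets.global.sc2s.sgov.gov",
		"us-isob-east-1":   "https://budgets.global.sc2s.sgov.gov",
	}
	raw, ok := hosts[region]
	if !ok {
		return "", fmt.Errorf("unknown region %q", region)
	}
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	return u.Host, nil
}

func main() {
	a, _ := resolveBudgetsHost("aws-iso-global")
	b, _ := resolveBudgetsHost("us-iso-east-1")
	// Both regions resolve to the same partition-wide host.
	fmt.Println(a == b, a)
}
```

This is why the test bodies differ only in `Region` while asserting the identical `URI`.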
+// For region aws-iso-b-global with FIPS disabled and DualStack disabled
+func TestEndpointCase19(t *testing.T) {
+ var params = EndpointParameters{
+ Region: ptr.String("aws-iso-b-global"),
+ UseFIPS: ptr.Bool(false),
+ UseDualStack: ptr.Bool(false),
+ }
+
+ resolver := NewDefaultEndpointResolverV2()
+ result, err := resolver.ResolveEndpoint(context.Background(), params)
+ _, _ = result, err
+
+ if err != nil {
+ t.Fatalf("expect no error, got %v", err)
+ }
+
+ uri, _ := url.Parse("https://budgets.global.sc2s.sgov.gov")
+
+ expectEndpoint := smithyendpoints.Endpoint{
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningName(&sp, "budgets")
+ smithyhttp.SetSigV4ASigningName(&sp, "budgets")
+
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -713,7 +835,7 @@ func TestEndpointCase17(t *testing.T) {
}
// For region us-isob-east-1 with FIPS enabled and DualStack enabled
-func TestEndpointCase18(t *testing.T) {
+func TestEndpointCase20(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(true),
@@ -733,7 +855,7 @@ func TestEndpointCase18(t *testing.T) {
}
// For region us-isob-east-1 with FIPS enabled and DualStack disabled
-func TestEndpointCase19(t *testing.T) {
+func TestEndpointCase21(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(true),
@@ -770,7 +892,7 @@ func TestEndpointCase19(t *testing.T) {
}
// For region us-isob-east-1 with FIPS disabled and DualStack enabled
-func TestEndpointCase20(t *testing.T) {
+func TestEndpointCase22(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(false),
@@ -790,7 +912,7 @@ func TestEndpointCase20(t *testing.T) {
}
// For region us-isob-east-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase21(t *testing.T) {
+func TestEndpointCase23(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isob-east-1"),
UseFIPS: ptr.Bool(false),
@@ -805,12 +927,28 @@ func TestEndpointCase21(t *testing.T) {
t.Fatalf("expect no error, got %v", err)
}
- uri, _ := url.Parse("https://budgets.us-isob-east-1.sc2s.sgov.gov")
+ uri, _ := url.Parse("https://budgets.global.sc2s.sgov.gov")
expectEndpoint := smithyendpoints.Endpoint{
- URI: *uri,
- Headers: http.Header{},
- Properties: smithy.Properties{},
+ URI: *uri,
+ Headers: http.Header{},
+ Properties: func() smithy.Properties {
+ var out smithy.Properties
+ smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+ {
+ SchemeID: "aws.auth#sigv4",
+ SignerProperties: func() smithy.Properties {
+ var sp smithy.Properties
+ smithyhttp.SetSigV4SigningName(&sp, "budgets")
+ smithyhttp.SetSigV4ASigningName(&sp, "budgets")
+
+ smithyhttp.SetSigV4SigningRegion(&sp, "us-isob-east-1")
+ return sp
+ }(),
+ },
+ })
+ return out
+ }(),
}
if e, a := expectEndpoint.URI, result.URI; e != a {
@@ -827,7 +965,7 @@ func TestEndpointCase21(t *testing.T) {
}
// For region eu-isoe-west-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase22(t *testing.T) {
+func TestEndpointCase24(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("eu-isoe-west-1"),
UseFIPS: ptr.Bool(false),
@@ -880,7 +1018,7 @@ func TestEndpointCase22(t *testing.T) {
}
// For region us-isof-south-1 with FIPS disabled and DualStack disabled
-func TestEndpointCase23(t *testing.T) {
+func TestEndpointCase25(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-isof-south-1"),
UseFIPS: ptr.Bool(false),
@@ -933,7 +1071,7 @@ func TestEndpointCase23(t *testing.T) {
}
// For custom endpoint with region set and fips disabled and dualstack disabled
-func TestEndpointCase24(t *testing.T) {
+func TestEndpointCase26(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-east-1"),
UseFIPS: ptr.Bool(false),
@@ -971,7 +1109,7 @@ func TestEndpointCase24(t *testing.T) {
}
// For custom endpoint with region not set and fips disabled and dualstack disabled
-func TestEndpointCase25(t *testing.T) {
+func TestEndpointCase27(t *testing.T) {
var params = EndpointParameters{
UseFIPS: ptr.Bool(false),
UseDualStack: ptr.Bool(false),
@@ -1008,7 +1146,7 @@ func TestEndpointCase25(t *testing.T) {
}
// For custom endpoint with fips enabled and dualstack disabled
-func TestEndpointCase26(t *testing.T) {
+func TestEndpointCase28(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-east-1"),
UseFIPS: ptr.Bool(true),
@@ -1029,7 +1167,7 @@ func TestEndpointCase26(t *testing.T) {
}
// For custom endpoint with fips disabled and dualstack enabled
-func TestEndpointCase27(t *testing.T) {
+func TestEndpointCase29(t *testing.T) {
var params = EndpointParameters{
Region: ptr.String("us-east-1"),
UseFIPS: ptr.Bool(false),
@@ -1050,7 +1188,7 @@ func TestEndpointCase27(t *testing.T) {
}
// Missing region
-func TestEndpointCase28(t *testing.T) {
+func TestEndpointCase30(t *testing.T) {
var params = EndpointParameters{}
resolver := NewDefaultEndpointResolverV2()
diff --git a/service/budgets/go_module_metadata.go b/service/budgets/go_module_metadata.go
index dc1c6969970..b18592c764e 100644
--- a/service/budgets/go_module_metadata.go
+++ b/service/budgets/go_module_metadata.go
@@ -3,4 +3,4 @@
package budgets
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.28.8"
+const goModuleVersion = "1.29.0"
diff --git a/service/cleanroomsml/CHANGELOG.md b/service/cleanroomsml/CHANGELOG.md
index a395cf8a058..ff56af14aa2 100644
--- a/service/cleanroomsml/CHANGELOG.md
+++ b/service/cleanroomsml/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.11.0 (2024-12-17)
+
+* **Feature**: Add support for SQL compute configuration for StartAudienceGenerationJob API.
+
# v1.10.2 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/cleanroomsml/api_op_StartTrainedModelInferenceJob.go b/service/cleanroomsml/api_op_StartTrainedModelInferenceJob.go
index d49655c1fed..0dd628a5acf 100644
--- a/service/cleanroomsml/api_op_StartTrainedModelInferenceJob.go
+++ b/service/cleanroomsml/api_op_StartTrainedModelInferenceJob.go
@@ -29,7 +29,7 @@ func (c *Client) StartTrainedModelInferenceJob(ctx context.Context, params *Star
type StartTrainedModelInferenceJobInput struct {
- // Defines he data source that is used for the trained model inference job.
+ // Defines the data source that is used for the trained model inference job.
//
// This member is required.
DataSource *types.ModelInferenceDataSource
diff --git a/service/cleanroomsml/deserializers.go b/service/cleanroomsml/deserializers.go
index f4ee7907678..b14bbdacc90 100644
--- a/service/cleanroomsml/deserializers.go
+++ b/service/cleanroomsml/deserializers.go
@@ -10453,6 +10453,11 @@ func awsRestjson1_deserializeDocumentAudienceGenerationJobDataSource(v **types.A
sv.RoleArn = ptr.String(jtv)
}
+ case "sqlComputeConfiguration":
+ if err := awsRestjson1_deserializeDocumentComputeConfiguration(&sv.SqlComputeConfiguration, value); err != nil {
+ return err
+ }
+
case "sqlParameters":
if err := awsRestjson1_deserializeDocumentProtectedQuerySQLParameters(&sv.SqlParameters, value); err != nil {
return err
diff --git a/service/cleanroomsml/go_module_metadata.go b/service/cleanroomsml/go_module_metadata.go
index 14304cd9a1d..296e6f1cc90 100644
--- a/service/cleanroomsml/go_module_metadata.go
+++ b/service/cleanroomsml/go_module_metadata.go
@@ -3,4 +3,4 @@
package cleanroomsml
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.10.2"
+const goModuleVersion = "1.11.0"
diff --git a/service/cleanroomsml/serializers.go b/service/cleanroomsml/serializers.go
index 3c70837847b..cd3d0df965d 100644
--- a/service/cleanroomsml/serializers.go
+++ b/service/cleanroomsml/serializers.go
@@ -5088,6 +5088,13 @@ func awsRestjson1_serializeDocumentAudienceGenerationJobDataSource(v *types.Audi
ok.String(*v.RoleArn)
}
+ if v.SqlComputeConfiguration != nil {
+ ok := object.Key("sqlComputeConfiguration")
+ if err := awsRestjson1_serializeDocumentComputeConfiguration(v.SqlComputeConfiguration, ok); err != nil {
+ return err
+ }
+ }
+
if v.SqlParameters != nil {
ok := object.Key("sqlParameters")
if err := awsRestjson1_serializeDocumentProtectedQuerySQLParameters(v.SqlParameters, ok); err != nil {
diff --git a/service/cleanroomsml/types/types.go b/service/cleanroomsml/types/types.go
index 0fab7843ab0..4560945b92d 100644
--- a/service/cleanroomsml/types/types.go
+++ b/service/cleanroomsml/types/types.go
@@ -85,6 +85,10 @@ type AudienceGenerationJobDataSource struct {
// ...
DataSource *S3ConfigMap
+ // Provides configuration information for the instances that will perform the
+ // compute work.
+ SqlComputeConfiguration ComputeConfiguration
+
// The protected SQL query parameters.
SqlParameters *ProtectedQuerySQLParameters
diff --git a/service/cloud9/CHANGELOG.md b/service/cloud9/CHANGELOG.md
index 5f72b397b14..177678cb35f 100644
--- a/service/cloud9/CHANGELOG.md
+++ b/service/cloud9/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.28.8 (2024-12-16)
+
+* **Documentation**: Added information that Ubuntu 18.04 will be removed from the available imageIds for Cloud9 because Ubuntu 18.04 ended standard support on May 31, 2023.
+
# v1.28.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/cloud9/api_op_CreateEnvironmentEC2.go b/service/cloud9/api_op_CreateEnvironmentEC2.go
index 0498f03462c..072328af937 100644
--- a/service/cloud9/api_op_CreateEnvironmentEC2.go
+++ b/service/cloud9/api_op_CreateEnvironmentEC2.go
@@ -14,6 +14,11 @@ import (
// Creates an Cloud9 development environment, launches an Amazon Elastic Compute
// Cloud (Amazon EC2) instance, and then connects from the instance to the
// environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) CreateEnvironmentEC2(ctx context.Context, params *CreateEnvironmentEC2Input, optFns ...func(*Options)) (*CreateEnvironmentEC2Output, error) {
if params == nil {
params = &CreateEnvironmentEC2Input{}
@@ -35,15 +40,14 @@ type CreateEnvironmentEC2Input struct {
// instance. To choose an AMI for the instance, you must specify a valid AMI alias
// or a valid Amazon EC2 Systems Manager (SSM) path.
//
- // From December 04, 2023, you will be required to include the imageId parameter
- // for the CreateEnvironmentEC2 action. This change will be reflected across all
- // direct methods of communicating with the API, such as Amazon Web Services SDK,
- // Amazon Web Services CLI and Amazon Web Services CloudFormation. This change will
- // only affect direct API consumers, and not Cloud9 console users.
- //
// We recommend using Amazon Linux 2023 as the AMI to create your environment as
// it is fully supported.
//
+ // From December 16, 2024, Ubuntu 18.04 will be removed from the list of available
+ // imageIds for Cloud9. This change is necessary because Ubuntu 18.04 ended standard
+ // support on May 31, 2023. This change will only affect direct API consumers, and
+ // not Cloud9 console users.
+ //
// Since Ubuntu 18.04 has ended standard support as of May 31, 2023, we recommend
// you choose Ubuntu 22.04.
//
diff --git a/service/cloud9/api_op_CreateEnvironmentMembership.go b/service/cloud9/api_op_CreateEnvironmentMembership.go
index 27adaf21651..704cba4569e 100644
--- a/service/cloud9/api_op_CreateEnvironmentMembership.go
+++ b/service/cloud9/api_op_CreateEnvironmentMembership.go
@@ -12,6 +12,11 @@ import (
)
// Adds an environment member to an Cloud9 development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) CreateEnvironmentMembership(ctx context.Context, params *CreateEnvironmentMembershipInput, optFns ...func(*Options)) (*CreateEnvironmentMembershipOutput, error) {
if params == nil {
params = &CreateEnvironmentMembershipInput{}
diff --git a/service/cloud9/api_op_DeleteEnvironment.go b/service/cloud9/api_op_DeleteEnvironment.go
index 9462d35a9b3..af7f81cbc13 100644
--- a/service/cloud9/api_op_DeleteEnvironment.go
+++ b/service/cloud9/api_op_DeleteEnvironment.go
@@ -12,6 +12,11 @@ import (
// Deletes an Cloud9 development environment. If an Amazon EC2 instance is
// connected to the environment, also terminates the instance.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) DeleteEnvironment(ctx context.Context, params *DeleteEnvironmentInput, optFns ...func(*Options)) (*DeleteEnvironmentOutput, error) {
if params == nil {
params = &DeleteEnvironmentInput{}
diff --git a/service/cloud9/api_op_DeleteEnvironmentMembership.go b/service/cloud9/api_op_DeleteEnvironmentMembership.go
index 10b8e6e5f02..388289b5994 100644
--- a/service/cloud9/api_op_DeleteEnvironmentMembership.go
+++ b/service/cloud9/api_op_DeleteEnvironmentMembership.go
@@ -11,6 +11,11 @@ import (
)
// Deletes an environment member from a development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) DeleteEnvironmentMembership(ctx context.Context, params *DeleteEnvironmentMembershipInput, optFns ...func(*Options)) (*DeleteEnvironmentMembershipOutput, error) {
if params == nil {
params = &DeleteEnvironmentMembershipInput{}
diff --git a/service/cloud9/api_op_DescribeEnvironmentMemberships.go b/service/cloud9/api_op_DescribeEnvironmentMemberships.go
index 6abe1ebe03c..20b95d073c9 100644
--- a/service/cloud9/api_op_DescribeEnvironmentMemberships.go
+++ b/service/cloud9/api_op_DescribeEnvironmentMemberships.go
@@ -13,6 +13,11 @@ import (
// Gets information about environment members for an Cloud9 development
// environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) DescribeEnvironmentMemberships(ctx context.Context, params *DescribeEnvironmentMembershipsInput, optFns ...func(*Options)) (*DescribeEnvironmentMembershipsOutput, error) {
if params == nil {
params = &DescribeEnvironmentMembershipsInput{}
diff --git a/service/cloud9/api_op_DescribeEnvironmentStatus.go b/service/cloud9/api_op_DescribeEnvironmentStatus.go
index 52b58034849..cf61e572426 100644
--- a/service/cloud9/api_op_DescribeEnvironmentStatus.go
+++ b/service/cloud9/api_op_DescribeEnvironmentStatus.go
@@ -12,6 +12,11 @@ import (
)
// Gets status information for an Cloud9 development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) DescribeEnvironmentStatus(ctx context.Context, params *DescribeEnvironmentStatusInput, optFns ...func(*Options)) (*DescribeEnvironmentStatusOutput, error) {
if params == nil {
params = &DescribeEnvironmentStatusInput{}
diff --git a/service/cloud9/api_op_DescribeEnvironments.go b/service/cloud9/api_op_DescribeEnvironments.go
index 1cb77b25de0..5b029a6d8f2 100644
--- a/service/cloud9/api_op_DescribeEnvironments.go
+++ b/service/cloud9/api_op_DescribeEnvironments.go
@@ -12,6 +12,11 @@ import (
)
// Gets information about Cloud9 development environments.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) DescribeEnvironments(ctx context.Context, params *DescribeEnvironmentsInput, optFns ...func(*Options)) (*DescribeEnvironmentsOutput, error) {
if params == nil {
params = &DescribeEnvironmentsInput{}
diff --git a/service/cloud9/api_op_ListEnvironments.go b/service/cloud9/api_op_ListEnvironments.go
index 3a45062f6ef..e17f3c45ab6 100644
--- a/service/cloud9/api_op_ListEnvironments.go
+++ b/service/cloud9/api_op_ListEnvironments.go
@@ -11,6 +11,11 @@ import (
)
// Gets a list of Cloud9 development environment identifiers.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) ListEnvironments(ctx context.Context, params *ListEnvironmentsInput, optFns ...func(*Options)) (*ListEnvironmentsOutput, error) {
if params == nil {
params = &ListEnvironmentsInput{}
diff --git a/service/cloud9/api_op_ListTagsForResource.go b/service/cloud9/api_op_ListTagsForResource.go
index b293dcf4bb7..48e7fda81dd 100644
--- a/service/cloud9/api_op_ListTagsForResource.go
+++ b/service/cloud9/api_op_ListTagsForResource.go
@@ -12,6 +12,11 @@ import (
)
// Gets a list of the tags associated with an Cloud9 development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) ListTagsForResource(ctx context.Context, params *ListTagsForResourceInput, optFns ...func(*Options)) (*ListTagsForResourceOutput, error) {
if params == nil {
params = &ListTagsForResourceInput{}
diff --git a/service/cloud9/api_op_TagResource.go b/service/cloud9/api_op_TagResource.go
index 491d7bc3ef6..79fe27aa1ba 100644
--- a/service/cloud9/api_op_TagResource.go
+++ b/service/cloud9/api_op_TagResource.go
@@ -13,8 +13,13 @@ import (
// Adds tags to an Cloud9 development environment.
//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
// Tags that you add to an Cloud9 environment by using this method will NOT be
// automatically propagated to underlying resources.
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) TagResource(ctx context.Context, params *TagResourceInput, optFns ...func(*Options)) (*TagResourceOutput, error) {
if params == nil {
params = &TagResourceInput{}
diff --git a/service/cloud9/api_op_UntagResource.go b/service/cloud9/api_op_UntagResource.go
index d90360bf9b6..8f8e62df63b 100644
--- a/service/cloud9/api_op_UntagResource.go
+++ b/service/cloud9/api_op_UntagResource.go
@@ -11,6 +11,11 @@ import (
)
// Removes tags from an Cloud9 development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) UntagResource(ctx context.Context, params *UntagResourceInput, optFns ...func(*Options)) (*UntagResourceOutput, error) {
if params == nil {
params = &UntagResourceInput{}
diff --git a/service/cloud9/api_op_UpdateEnvironment.go b/service/cloud9/api_op_UpdateEnvironment.go
index 75109b0ff2a..a172c699954 100644
--- a/service/cloud9/api_op_UpdateEnvironment.go
+++ b/service/cloud9/api_op_UpdateEnvironment.go
@@ -12,6 +12,11 @@ import (
)
// Changes the settings of an existing Cloud9 development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) UpdateEnvironment(ctx context.Context, params *UpdateEnvironmentInput, optFns ...func(*Options)) (*UpdateEnvironmentOutput, error) {
if params == nil {
params = &UpdateEnvironmentInput{}
diff --git a/service/cloud9/api_op_UpdateEnvironmentMembership.go b/service/cloud9/api_op_UpdateEnvironmentMembership.go
index 4c3f2f068cf..f987a981d40 100644
--- a/service/cloud9/api_op_UpdateEnvironmentMembership.go
+++ b/service/cloud9/api_op_UpdateEnvironmentMembership.go
@@ -13,6 +13,11 @@ import (
// Changes the settings of an existing environment member for an Cloud9
// development environment.
+//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
func (c *Client) UpdateEnvironmentMembership(ctx context.Context, params *UpdateEnvironmentMembershipInput, optFns ...func(*Options)) (*UpdateEnvironmentMembershipOutput, error) {
if params == nil {
params = &UpdateEnvironmentMembershipInput{}
diff --git a/service/cloud9/doc.go b/service/cloud9/doc.go
index 665ffac62f5..3f9fd6b7e0e 100644
--- a/service/cloud9/doc.go
+++ b/service/cloud9/doc.go
@@ -10,6 +10,9 @@
//
// For more information about Cloud9, see the [Cloud9 User Guide].
//
+// Cloud9 is no longer available to new customers. Existing customers of Cloud9
+// can continue to use the service as normal. [Learn more]
+//
// Cloud9 supports these operations:
//
// - CreateEnvironmentEC2 : Creates an Cloud9 development environment, launches
@@ -44,4 +47,5 @@
// environment member for an environment.
//
// [Cloud9 User Guide]: https://docs.aws.amazon.com/cloud9/latest/user-guide
+// [Learn more]: http://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/
package cloud9
diff --git a/service/cloud9/go_module_metadata.go b/service/cloud9/go_module_metadata.go
index ead5e95bd7d..a5cb46c9050 100644
--- a/service/cloud9/go_module_metadata.go
+++ b/service/cloud9/go_module_metadata.go
@@ -3,4 +3,4 @@
package cloud9
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.28.7"
+const goModuleVersion = "1.28.8"
diff --git a/service/cloudfront/CHANGELOG.md b/service/cloudfront/CHANGELOG.md
index 33c1156ff48..21f621376c1 100644
--- a/service/cloudfront/CHANGELOG.md
+++ b/service/cloudfront/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.44.0 (2024-12-17)
+
+* **Feature**: Adds support for OriginReadTimeout and OriginKeepaliveTimeout to create CloudFront Distributions with VPC Origins.
+
# v1.43.1 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/cloudfront/deserializers.go b/service/cloudfront/deserializers.go
index 44901b107c7..3a0984c7ccc 100644
--- a/service/cloudfront/deserializers.go
+++ b/service/cloudfront/deserializers.go
@@ -50745,6 +50745,40 @@ func awsRestxml_deserializeDocumentVpcOriginConfig(v **types.VpcOriginConfig, de
originalDecoder := decoder
decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
switch {
+ case strings.EqualFold("OriginKeepaliveTimeout", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv := string(val)
+ i64, err := strconv.ParseInt(xtv, 10, 64)
+ if err != nil {
+ return err
+ }
+ sv.OriginKeepaliveTimeout = ptr.Int32(int32(i64))
+ }
+
+ case strings.EqualFold("OriginReadTimeout", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv := string(val)
+ i64, err := strconv.ParseInt(xtv, 10, 64)
+ if err != nil {
+ return err
+ }
+ sv.OriginReadTimeout = ptr.Int32(int32(i64))
+ }
+
case strings.EqualFold("VpcOriginId", t.Name.Local):
val, err := decoder.Value()
if err != nil {
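The deserializer cases added above follow a fixed shape for XML integer members: read the element text, parse it base-10 with `strconv.ParseInt`, and store a `*int32` so an absent element stays distinguishable from zero. A stdlib-only sketch of that step in isolation (the helper name is my own, not part of the SDK):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseXMLInt32 mirrors the generated deserializer's integer handling:
// parse the element's text value and return a pointer, so callers can
// tell "member absent" (nil) apart from "member set to 0".
func parseXMLInt32(xtv string) (*int32, error) {
	// Parsing with bitSize 32 rejects values that would overflow
	// int32 instead of silently truncating them.
	i64, err := strconv.ParseInt(xtv, 10, 32)
	if err != nil {
		return nil, err
	}
	v := int32(i64)
	return &v, nil
}

func main() {
	timeout, err := parseXMLInt32("30")
	if err != nil {
		panic(err)
	}
	fmt.Println(*timeout)
}
```

Note one design difference: the generated code parses with bitSize 64 and then converts with `int32(i64)`, which would silently truncate an out-of-range value; passing 32 to `ParseInt` surfaces the overflow as an error instead.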
diff --git a/service/cloudfront/go_module_metadata.go b/service/cloudfront/go_module_metadata.go
index 7b4a342c302..ba04f8ae28b 100644
--- a/service/cloudfront/go_module_metadata.go
+++ b/service/cloudfront/go_module_metadata.go
@@ -3,4 +3,4 @@
package cloudfront
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.43.1"
+const goModuleVersion = "1.44.0"
diff --git a/service/cloudfront/serializers.go b/service/cloudfront/serializers.go
index 947ff5a1c6a..56982d21194 100644
--- a/service/cloudfront/serializers.go
+++ b/service/cloudfront/serializers.go
@@ -15349,6 +15349,28 @@ func awsRestxml_serializeDocumentViewerCertificate(v *types.ViewerCertificate, v
func awsRestxml_serializeDocumentVpcOriginConfig(v *types.VpcOriginConfig, value smithyxml.Value) error {
defer value.Close()
+ if v.OriginKeepaliveTimeout != nil {
+ rootAttr := []smithyxml.Attr{}
+ root := smithyxml.StartElement{
+ Name: smithyxml.Name{
+ Local: "OriginKeepaliveTimeout",
+ },
+ Attr: rootAttr,
+ }
+ el := value.MemberElement(root)
+ el.Integer(*v.OriginKeepaliveTimeout)
+ }
+ if v.OriginReadTimeout != nil {
+ rootAttr := []smithyxml.Attr{}
+ root := smithyxml.StartElement{
+ Name: smithyxml.Name{
+ Local: "OriginReadTimeout",
+ },
+ Attr: rootAttr,
+ }
+ el := value.MemberElement(root)
+ el.Integer(*v.OriginReadTimeout)
+ }
if v.VpcOriginId != nil {
rootAttr := []smithyxml.Attr{}
root := smithyxml.StartElement{
diff --git a/service/cloudfront/types/types.go b/service/cloudfront/types/types.go
index 98116b0d92a..9262c6eb29f 100644
--- a/service/cloudfront/types/types.go
+++ b/service/cloudfront/types/types.go
@@ -1424,9 +1424,9 @@ type CustomOriginConfig struct {
// origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the
// default (if you don't specify otherwise) is 5 seconds.
//
- // For more information, see [Origin Keep-alive Timeout] in the Amazon CloudFront Developer Guide.
+ // For more information, see [Keep-alive timeout (custom origins only)] in the Amazon CloudFront Developer Guide.
//
- // [Origin Keep-alive Timeout]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginKeepaliveTimeout
+ // [Keep-alive timeout (custom origins only)]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginKeepaliveTimeout
OriginKeepaliveTimeout *int32
// Specifies how long, in seconds, CloudFront waits for a response from the
@@ -1434,9 +1434,9 @@ type CustomOriginConfig struct {
// is 1 second, the maximum is 60 seconds, and the default (if you don't specify
// otherwise) is 30 seconds.
//
- // For more information, see [Origin Response Timeout] in the Amazon CloudFront Developer Guide.
+ // For more information, see [Response timeout (custom origins only)] in the Amazon CloudFront Developer Guide.
//
- // [Origin Response Timeout]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginResponseTimeout
+ // [Response timeout (custom origins only)]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginResponseTimeout
OriginReadTimeout *int32
// Specifies the minimum SSL/TLS protocol that CloudFront uses when connecting to
@@ -1816,14 +1816,18 @@ type DistributionConfig struct {
// [Customizing Error Responses]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
CustomErrorResponses *CustomErrorResponses
- // The object that you want CloudFront to request from your origin (for example,
- // index.html ) when a viewer requests the root URL for your distribution (
- // https://www.example.com ) instead of an object in your distribution (
- // https://www.example.com/product-description.html ). Specifying a default root
- // object avoids exposing the contents of your distribution.
+ // When a viewer requests the root URL for your distribution, the default root
+ // object is the object that you want CloudFront to request from your origin. For
+ // example, if your root URL is https://www.example.com , you can specify
+ // CloudFront to return the index.html file as the default root object. You can
+ // specify a default root object so that viewers see a specific file or object,
+ // instead of another object in your distribution (for example,
+ // https://www.example.com/product-description.html ). A default root object avoids
+ // exposing the contents of your distribution.
//
- // Specify only the object name, for example, index.html . Don't add a / before
- // the object name.
+ // You can specify the object name or a path to the object name (for example,
+ // index.html or exampleFolderName/index.html ). Your string can't begin with a
+ // forward slash ( / ). Only specify the object name or the path to the object.
//
// If you don't want to specify a default root object when you create a
// distribution, include an empty DefaultRootObject element.
@@ -1834,10 +1838,10 @@ type DistributionConfig struct {
// To replace the default root object, update the distribution configuration and
// specify the new object.
//
- // For more information about the default root object, see [Creating a Default Root Object] in the Amazon
+ // For more information about the default root object, see [Specify a default root object] in the Amazon
// CloudFront Developer Guide.
//
- // [Creating a Default Root Object]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html
+ // [Specify a default root object]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html
DefaultRootObject *string
// (Optional) Specify the HTTP version(s) that you want viewers to use to
@@ -5808,6 +5812,25 @@ type VpcOriginConfig struct {
// This member is required.
VpcOriginId *string
+ // Specifies how long, in seconds, CloudFront persists its connection to the
+ // origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the
+ // default (if you don't specify otherwise) is 5 seconds.
+ //
+ // For more information, see [Keep-alive timeout (custom origins only)] in the Amazon CloudFront Developer Guide.
+ //
+ // [Keep-alive timeout (custom origins only)]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginKeepaliveTimeout
+ OriginKeepaliveTimeout *int32
+
+ // Specifies how long, in seconds, CloudFront waits for a response from the
+ // origin. This is also known as the origin response timeout. The minimum timeout
+ // is 1 second, the maximum is 60 seconds, and the default (if you don't specify
+ // otherwise) is 30 seconds.
+ //
+ // For more information, see [Response timeout (custom origins only)] in the Amazon CloudFront Developer Guide.
+ //
+ // [Response timeout (custom origins only)]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginResponseTimeout
+ OriginReadTimeout *int32
+
noSmithyDocumentSerde
}
diff --git a/service/cloudhsmv2/CHANGELOG.md b/service/cloudhsmv2/CHANGELOG.md
index 3215d55117f..9e00c71d047 100644
--- a/service/cloudhsmv2/CHANGELOG.md
+++ b/service/cloudhsmv2/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.28.0 (2024-12-13)
+
+* **Feature**: Add support for Dual-Stack hsm2m.medium clusters. Customers can now create hsm2m.medium clusters with both IPv4 and IPv6 connection capabilities by specifying a new parameter, NetworkType=DUALSTACK, during cluster creation.
+
# v1.27.8 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/cloudhsmv2/api_op_CreateCluster.go b/service/cloudhsmv2/api_op_CreateCluster.go
index d016fb19bf9..badafbc8708 100644
--- a/service/cloudhsmv2/api_op_CreateCluster.go
+++ b/service/cloudhsmv2/api_op_CreateCluster.go
@@ -56,6 +56,10 @@ type CreateClusterInput struct {
// The mode to use in the cluster. The allowed values are FIPS and NON_FIPS .
Mode types.ClusterMode
+ // The NetworkType to create a cluster with. The allowed values are IPV4 and
+ // DUALSTACK .
+ NetworkType types.NetworkType
+
// The identifier (ID) or the Amazon Resource Name (ARN) of the cluster backup to
// restore. Use this value to restore the cluster from a backup instead of creating
// a new cluster. To find the backup ID or ARN, use DescribeBackups. If using a backup in another
diff --git a/service/cloudhsmv2/deserializers.go b/service/cloudhsmv2/deserializers.go
index 7799e984bc0..5c394399202 100644
--- a/service/cloudhsmv2/deserializers.go
+++ b/service/cloudhsmv2/deserializers.go
@@ -2119,6 +2119,9 @@ func awsAwsjson11_deserializeOpErrorTagResource(response *smithyhttp.Response, m
case strings.EqualFold("CloudHsmInvalidRequestException", errorCode):
return awsAwsjson11_deserializeErrorCloudHsmInvalidRequestException(response, errorBody)
+ case strings.EqualFold("CloudHsmResourceLimitExceededException", errorCode):
+ return awsAwsjson11_deserializeErrorCloudHsmResourceLimitExceededException(response, errorBody)
+
case strings.EqualFold("CloudHsmResourceNotFoundException", errorCode):
return awsAwsjson11_deserializeErrorCloudHsmResourceNotFoundException(response, errorBody)
@@ -2369,6 +2372,41 @@ func awsAwsjson11_deserializeErrorCloudHsmInvalidRequestException(response *smit
return output
}
+func awsAwsjson11_deserializeErrorCloudHsmResourceLimitExceededException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ output := &types.CloudHsmResourceLimitExceededException{}
+ err := awsAwsjson11_deserializeDocumentCloudHsmResourceLimitExceededException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ return output
+}
+
func awsAwsjson11_deserializeErrorCloudHsmResourceNotFoundException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
var buff [1024]byte
ringBuffer := smithyio.NewRingBuffer(buff[:])
@@ -2927,6 +2965,46 @@ func awsAwsjson11_deserializeDocumentCloudHsmInvalidRequestException(v **types.C
return nil
}
+func awsAwsjson11_deserializeDocumentCloudHsmResourceLimitExceededException(v **types.CloudHsmResourceLimitExceededException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.CloudHsmResourceLimitExceededException
+ if *v == nil {
+ sv = &types.CloudHsmResourceLimitExceededException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected errorMessage to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeDocumentCloudHsmResourceNotFoundException(v **types.CloudHsmResourceNotFoundException, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -3136,6 +3214,15 @@ func awsAwsjson11_deserializeDocumentCluster(v **types.Cluster, value interface{
sv.Mode = types.ClusterMode(jtv)
}
+ case "NetworkType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected NetworkType to be of type string, got %T instead", value)
+ }
+ sv.NetworkType = types.NetworkType(jtv)
+ }
+
case "PreCoPassword":
if value != nil {
jtv, ok := value.(string)
@@ -3411,6 +3498,15 @@ func awsAwsjson11_deserializeDocumentHsm(v **types.Hsm, value interface{}) error
sv.EniIp = ptr.String(jtv)
}
+ case "EniIpV6":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected IpV6Address to be of type string, got %T instead", value)
+ }
+ sv.EniIpV6 = ptr.String(jtv)
+ }
+
case "HsmId":
if value != nil {
jtv, ok := value.(string)
diff --git a/service/cloudhsmv2/go_module_metadata.go b/service/cloudhsmv2/go_module_metadata.go
index a0604848452..15449a47ef6 100644
--- a/service/cloudhsmv2/go_module_metadata.go
+++ b/service/cloudhsmv2/go_module_metadata.go
@@ -3,4 +3,4 @@
package cloudhsmv2
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.27.8"
+const goModuleVersion = "1.28.0"
diff --git a/service/cloudhsmv2/serializers.go b/service/cloudhsmv2/serializers.go
index 5d6b74fc92f..c11abbd1586 100644
--- a/service/cloudhsmv2/serializers.go
+++ b/service/cloudhsmv2/serializers.go
@@ -1254,6 +1254,11 @@ func awsAwsjson11_serializeOpDocumentCreateClusterInput(v *CreateClusterInput, v
ok.String(string(v.Mode))
}
+ if len(v.NetworkType) > 0 {
+ ok := object.Key("NetworkType")
+ ok.String(string(v.NetworkType))
+ }
+
if v.SourceBackupId != nil {
ok := object.Key("SourceBackupId")
ok.String(*v.SourceBackupId)
diff --git a/service/cloudhsmv2/types/enums.go b/service/cloudhsmv2/types/enums.go
index f822474b9f5..63dbcdd0b1d 100644
--- a/service/cloudhsmv2/types/enums.go
+++ b/service/cloudhsmv2/types/enums.go
@@ -88,6 +88,8 @@ const (
ClusterStateInitialized ClusterState = "INITIALIZED"
ClusterStateActive ClusterState = "ACTIVE"
ClusterStateUpdateInProgress ClusterState = "UPDATE_IN_PROGRESS"
+ ClusterStateModifyInProgress ClusterState = "MODIFY_IN_PROGRESS"
+ ClusterStateRollbackInProgress ClusterState = "ROLLBACK_IN_PROGRESS"
ClusterStateDeleteInProgress ClusterState = "DELETE_IN_PROGRESS"
ClusterStateDeleted ClusterState = "DELETED"
ClusterStateDegraded ClusterState = "DEGRADED"
@@ -105,6 +107,8 @@ func (ClusterState) Values() []ClusterState {
"INITIALIZED",
"ACTIVE",
"UPDATE_IN_PROGRESS",
+ "MODIFY_IN_PROGRESS",
+ "ROLLBACK_IN_PROGRESS",
"DELETE_IN_PROGRESS",
"DELETED",
"DEGRADED",
@@ -135,3 +139,22 @@ func (HsmState) Values() []HsmState {
"DELETED",
}
}
+
+type NetworkType string
+
+// Enum values for NetworkType
+const (
+ NetworkTypeIpv4 NetworkType = "IPV4"
+ NetworkTypeDualstack NetworkType = "DUALSTACK"
+)
+
+// Values returns all known values for NetworkType. Note that this can be expanded
+// in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (NetworkType) Values() []NetworkType {
+ return []NetworkType{
+ "IPV4",
+ "DUALSTACK",
+ }
+}
diff --git a/service/cloudhsmv2/types/errors.go b/service/cloudhsmv2/types/errors.go
index 8db9046db82..b3c26426876 100644
--- a/service/cloudhsmv2/types/errors.go
+++ b/service/cloudhsmv2/types/errors.go
@@ -87,6 +87,34 @@ func (e *CloudHsmInvalidRequestException) ErrorCode() string {
}
func (e *CloudHsmInvalidRequestException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+// The request was rejected because it exceeds a CloudHSM limit.
+type CloudHsmResourceLimitExceededException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *CloudHsmResourceLimitExceededException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *CloudHsmResourceLimitExceededException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *CloudHsmResourceLimitExceededException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "CloudHsmResourceLimitExceededException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *CloudHsmResourceLimitExceededException) ErrorFault() smithy.ErrorFault {
+ return smithy.FaultClient
+}
+
// The request was rejected because it refers to a resource that cannot be found.
type CloudHsmResourceNotFoundException struct {
Message *string
diff --git a/service/cloudhsmv2/types/types.go b/service/cloudhsmv2/types/types.go
index 388e7ee96c0..2fcd862371e 100644
--- a/service/cloudhsmv2/types/types.go
+++ b/service/cloudhsmv2/types/types.go
@@ -129,6 +129,19 @@ type Cluster struct {
// The mode of the cluster.
Mode ClusterMode
+ // The cluster's NetworkType can be set to either IPV4 (which is the default) or
+ // DUALSTACK. When set to IPV4, communication between your application and the
+ // Hardware Security Modules (HSMs) is restricted to the IPv4 protocol only. In
+ // contrast, the DUALSTACK network type enables communication over both the IPv4
+ // and IPv6 protocols. To use the DUALSTACK option, you'll need to configure your
+ // Virtual Private Cloud (VPC) and subnets to support both IPv4 and IPv6. This
+ // involves adding IPv6 Classless Inter-Domain Routing (CIDR) blocks to the
+ // existing IPv4 CIDR blocks in your subnets. The choice between IPV4 and DUALSTACK
+ // network types determines the flexibility of the network addressing setup for
+ // your cluster. The DUALSTACK option provides more flexibility by allowing both
+ // IPv4 and IPv6 communication.
+ NetworkType NetworkType
+
// The default password for the cluster's Pre-Crypto Officer (PRECO) user.
PreCoPassword *string
@@ -200,6 +213,9 @@ type Hsm struct {
// The IP address of the HSM's elastic network interface (ENI).
EniIp *string
+ // The IPv6 address (if any) of the HSM's elastic network interface (ENI).
+ EniIpV6 *string
+
// The HSM's state.
State HsmState
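The new `NetworkType` follows the same pattern as every generated string enum in this SDK: an open string type whose `Values()` method lists only the members this client version knows about. A self-contained sketch of that pattern (mirroring, not importing, the generated `cloudhsmv2/types` code) and why callers should tolerate unknown values:

```go
package main

import "fmt"

// NetworkType mirrors the generated enum pattern: a string-typed enum whose
// Values() method lists the currently known members.
type NetworkType string

const (
	NetworkTypeIpv4      NetworkType = "IPV4"
	NetworkTypeDualstack NetworkType = "DUALSTACK"
)

func (NetworkType) Values() []NetworkType {
	return []NetworkType{"IPV4", "DUALSTACK"}
}

// known reports whether nt is a value this client version recognizes. The
// service can introduce new values before the client is regenerated, so
// callers should handle unknown strings gracefully rather than erroring.
func known(nt NetworkType) bool {
	for _, v := range nt.Values() {
		if v == nt {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(known(NetworkTypeDualstack), known(NetworkType("IPV9")))
}
```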
diff --git a/service/codebuild/CHANGELOG.md b/service/codebuild/CHANGELOG.md
index 9a9880222b5..6296767572b 100644
--- a/service/codebuild/CHANGELOG.md
+++ b/service/codebuild/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.49.3 (2024-12-13)
+
+* No change notes available for this release.
+
# v1.49.2 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/codebuild/go_module_metadata.go b/service/codebuild/go_module_metadata.go
index 6ffa1f47ce9..e85b7c381ac 100644
--- a/service/codebuild/go_module_metadata.go
+++ b/service/codebuild/go_module_metadata.go
@@ -3,4 +3,4 @@
package codebuild
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.49.2"
+const goModuleVersion = "1.49.3"
diff --git a/service/codebuild/internal/endpoints/endpoints.go b/service/codebuild/internal/endpoints/endpoints.go
index d89ef6eebb2..2da508377c6 100644
--- a/service/codebuild/internal/endpoints/endpoints.go
+++ b/service/codebuild/internal/endpoints/endpoints.go
@@ -348,6 +348,14 @@ var defaultPartitions = endpoints.Partitions{
},
RegionRegex: partitionRegexp.AwsIso,
IsRegionalized: true,
+ Endpoints: endpoints.Endpoints{
+ endpoints.EndpointKey{
+ Region: "us-iso-east-1",
+ }: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "us-iso-west-1",
+ }: endpoints.Endpoint{},
+ },
},
{
ID: "aws-iso-b",
diff --git a/service/codepipeline/CHANGELOG.md b/service/codepipeline/CHANGELOG.md
index be561f83f14..62c209ef9c5 100644
--- a/service/codepipeline/CHANGELOG.md
+++ b/service/codepipeline/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.38.0 (2024-12-17)
+
+* **Feature**: AWS CodePipeline V2 type pipelines now support Managed Compute Rule.
+
# v1.37.1 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/codepipeline/api_op_GetActionType.go b/service/codepipeline/api_op_GetActionType.go
index 99ce13d7493..d0873f73f34 100644
--- a/service/codepipeline/api_op_GetActionType.go
+++ b/service/codepipeline/api_op_GetActionType.go
@@ -46,6 +46,8 @@ type GetActionTypeInput struct {
//
// - Invoke
//
+ // - Compute
+ //
// This member is required.
Category types.ActionCategory
diff --git a/service/codepipeline/api_op_ListRuleTypes.go b/service/codepipeline/api_op_ListRuleTypes.go
index a928e558e2d..19878bd2439 100644
--- a/service/codepipeline/api_op_ListRuleTypes.go
+++ b/service/codepipeline/api_op_ListRuleTypes.go
@@ -11,7 +11,11 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Lists the rules for the condition.
+// Lists the rules for the condition. For more information about conditions, see [Stage conditions].
+// For more information about rules, see the [CodePipeline rule reference].
+//
+// [Stage conditions]: https://docs.aws.amazon.com/codepipeline/latest/userguide/stage-conditions.html
+// [CodePipeline rule reference]: https://docs.aws.amazon.com/codepipeline/latest/userguide/rule-reference.html
func (c *Client) ListRuleTypes(ctx context.Context, params *ListRuleTypesInput, optFns ...func(*Options)) (*ListRuleTypesOutput, error) {
if params == nil {
params = &ListRuleTypesInput{}
diff --git a/service/codepipeline/deserializers.go b/service/codepipeline/deserializers.go
index 20e58a55835..0c8778051ea 100644
--- a/service/codepipeline/deserializers.go
+++ b/service/codepipeline/deserializers.go
@@ -13203,6 +13203,11 @@ func awsAwsjson11_deserializeDocumentRuleDeclaration(v **types.RuleDeclaration,
for key, value := range shape {
switch key {
+ case "commands":
+ if err := awsAwsjson11_deserializeDocumentCommandList(&sv.Commands, value); err != nil {
+ return err
+ }
+
case "configuration":
if err := awsAwsjson11_deserializeDocumentRuleConfigurationMap(&sv.Configuration, value); err != nil {
return err
diff --git a/service/codepipeline/go_module_metadata.go b/service/codepipeline/go_module_metadata.go
index 83e8592133d..e6bbd48c9e9 100644
--- a/service/codepipeline/go_module_metadata.go
+++ b/service/codepipeline/go_module_metadata.go
@@ -3,4 +3,4 @@
package codepipeline
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.37.1"
+const goModuleVersion = "1.38.0"
diff --git a/service/codepipeline/serializers.go b/service/codepipeline/serializers.go
index 9e9b6d6cd0f..5a1e2c2b02a 100644
--- a/service/codepipeline/serializers.go
+++ b/service/codepipeline/serializers.go
@@ -4002,6 +4002,13 @@ func awsAwsjson11_serializeDocumentRuleDeclaration(v *types.RuleDeclaration, val
object := value.Object()
defer object.Close()
+ if v.Commands != nil {
+ ok := object.Key("commands")
+ if err := awsAwsjson11_serializeDocumentCommandList(v.Commands, ok); err != nil {
+ return err
+ }
+ }
+
if v.Configuration != nil {
ok := object.Key("configuration")
if err := awsAwsjson11_serializeDocumentRuleConfigurationMap(v.Configuration, ok); err != nil {
diff --git a/service/codepipeline/types/types.go b/service/codepipeline/types/types.go
index 5326a6f8070..2540869dda4 100644
--- a/service/codepipeline/types/types.go
+++ b/service/codepipeline/types/types.go
@@ -520,6 +520,8 @@ type ActionTypeId struct {
//
// - Approval
//
+ // - Compute
+ //
// This member is required.
Category ActionCategory
@@ -878,7 +880,11 @@ type BlockerDeclaration struct {
}
// The condition for the stage. A condition is made up of the rules and the result
-// for the condition.
+// for the condition. For more information about conditions, see [Stage conditions]. For more
+// information about rules, see the [CodePipeline rule reference].
+//
+// [Stage conditions]: https://docs.aws.amazon.com/codepipeline/latest/userguide/stage-conditions.html
+// [CodePipeline rule reference]: https://docs.aws.amazon.com/codepipeline/latest/userguide/rule-reference.html
type Condition struct {
// The action to be done when the condition is met. For example, rolling back an
@@ -1847,10 +1853,14 @@ type RuleConfigurationProperty struct {
// Represents information about the rule to be created for an associated
// condition. An example would be creating a new rule for an entry condition, such
// as a rule that checks for a test result before allowing the run to enter the
-// deployment stage.
+// deployment stage. For more information about conditions, see [Stage conditions]. For more
+// information about rules, see the [CodePipeline rule reference].
+//
+// [Stage conditions]: https://docs.aws.amazon.com/codepipeline/latest/userguide/stage-conditions.html
+// [CodePipeline rule reference]: https://docs.aws.amazon.com/codepipeline/latest/userguide/rule-reference.html
type RuleDeclaration struct {
- // The name of the rule that is created for the condition, such as CheckAllResults.
+ // The name of the rule that is created for the condition, such as VariableCheck .
//
// This member is required.
Name *string
@@ -1861,6 +1871,13 @@ type RuleDeclaration struct {
// This member is required.
RuleTypeId *RuleTypeId
+ // The shell commands to run with your commands rule in CodePipeline. All commands
+ // are supported except multi-line formats. While CodeBuild logs and permissions
+ // are used, you do not need to create any resources in CodeBuild.
+ //
+ // Using compute time for this action will incur separate charges in CodeBuild.
+ Commands []string
+
// The action configuration fields for the rule.
Configuration map[string]string
diff --git a/service/connect/CHANGELOG.md b/service/connect/CHANGELOG.md
index 8283f6d0646..d2879609d0b 100644
--- a/service/connect/CHANGELOG.md
+++ b/service/connect/CHANGELOG.md
@@ -1,3 +1,11 @@
+# v1.122.0 (2024-12-18)
+
+* **Feature**: This release adds support for the UpdateParticipantAuthentication API used for customer authentication within Amazon Connect chats.
+
+# v1.121.0 (2024-12-12)
+
+* **Feature**: Configure holidays and other overrides to hours of operation in advance. During contact handling, Amazon Connect automatically checks for overrides and provides customers with an appropriate flow path. After an override period passes, the call center automatically reverts to standard hours of operation.
+
# v1.120.0 (2024-12-10)
* **Feature**: Add support for Push Notifications for Amazon Connect chat. With Push Notifications enabled an alert could be sent to customers about new messages even when they aren't actively using the mobile application.
diff --git a/service/connect/api_op_CreateHoursOfOperationOverride.go b/service/connect/api_op_CreateHoursOfOperationOverride.go
new file mode 100644
index 00000000000..009b6eb52a5
--- /dev/null
+++ b/service/connect/api_op_CreateHoursOfOperationOverride.go
@@ -0,0 +1,187 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/connect/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Creates an hours of operation override in an Amazon Connect hours of operation
+// resource.
+func (c *Client) CreateHoursOfOperationOverride(ctx context.Context, params *CreateHoursOfOperationOverrideInput, optFns ...func(*Options)) (*CreateHoursOfOperationOverrideOutput, error) {
+ if params == nil {
+ params = &CreateHoursOfOperationOverrideInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "CreateHoursOfOperationOverride", params, optFns, c.addOperationCreateHoursOfOperationOverrideMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*CreateHoursOfOperationOverrideOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type CreateHoursOfOperationOverrideInput struct {
+
+ // Configuration information for the hours of operation override: day, start time,
+ // and end time.
+ //
+ // This member is required.
+ Config []types.HoursOfOperationOverrideConfig
+
+	// The date from which the hours of operation override is effective.
+ //
+ // This member is required.
+ EffectiveFrom *string
+
+	// The date until which the hours of operation override is effective.
+ //
+ // This member is required.
+ EffectiveTill *string
+
+	// The identifier for the hours of operation.
+ //
+ // This member is required.
+ HoursOfOperationId *string
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ // The name of the hours of operation override.
+ //
+ // This member is required.
+ Name *string
+
+ // The description of the hours of operation override.
+ Description *string
+
+ noSmithyDocumentSerde
+}
+
+type CreateHoursOfOperationOverrideOutput struct {
+
+ // The identifier for the hours of operation override.
+ HoursOfOperationOverrideId *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationCreateHoursOfOperationOverrideMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpCreateHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpCreateHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "CreateHoursOfOperationOverride"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpCreateHoursOfOperationOverrideValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opCreateHoursOfOperationOverride(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opCreateHoursOfOperationOverride(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "CreateHoursOfOperationOverride",
+ }
+}
diff --git a/service/connect/api_op_CreatePushNotificationRegistration.go b/service/connect/api_op_CreatePushNotificationRegistration.go
index 295703fe44d..fd567c7c84d 100644
--- a/service/connect/api_op_CreatePushNotificationRegistration.go
+++ b/service/connect/api_op_CreatePushNotificationRegistration.go
@@ -15,7 +15,7 @@ import (
// push notifications. For more information about push notifications, see [Set up push notifications in Amazon Connect for mobile chat]in the
// Amazon Connect Administrator Guide.
//
-// [Set up push notifications in Amazon Connect for mobile chat]: https://docs.aws.amazon.com/connect/latest/adminguide/set-up-push-notifications-for-mobile-chat.html
+// [Set up push notifications in Amazon Connect for mobile chat]: https://docs.aws.amazon.com/connect/latest/adminguide/enable-push-notifications-for-mobile-chat.html
func (c *Client) CreatePushNotificationRegistration(ctx context.Context, params *CreatePushNotificationRegistrationInput, optFns ...func(*Options)) (*CreatePushNotificationRegistrationOutput, error) {
if params == nil {
params = &CreatePushNotificationRegistrationInput{}
diff --git a/service/connect/api_op_CreateQueue.go b/service/connect/api_op_CreateQueue.go
index 1864400edfd..5bc29237ca9 100644
--- a/service/connect/api_op_CreateQueue.go
+++ b/service/connect/api_op_CreateQueue.go
@@ -11,8 +11,6 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// This API is in preview release for Amazon Connect and is subject to change.
-//
// Creates a new queue for the specified Amazon Connect instance.
//
// - If the phone number is claimed to a traffic distribution group that was
diff --git a/service/connect/api_op_DeleteHoursOfOperationOverride.go b/service/connect/api_op_DeleteHoursOfOperationOverride.go
new file mode 100644
index 00000000000..1c532cc2ce6
--- /dev/null
+++ b/service/connect/api_op_DeleteHoursOfOperationOverride.go
@@ -0,0 +1,163 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Deletes an hours of operation override in an Amazon Connect hours of operation
+// resource.
+func (c *Client) DeleteHoursOfOperationOverride(ctx context.Context, params *DeleteHoursOfOperationOverrideInput, optFns ...func(*Options)) (*DeleteHoursOfOperationOverrideOutput, error) {
+ if params == nil {
+ params = &DeleteHoursOfOperationOverrideInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "DeleteHoursOfOperationOverride", params, optFns, c.addOperationDeleteHoursOfOperationOverrideMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*DeleteHoursOfOperationOverrideOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type DeleteHoursOfOperationOverrideInput struct {
+
+ // The identifier for the hours of operation.
+ //
+ // This member is required.
+ HoursOfOperationId *string
+
+ // The identifier for the hours of operation override.
+ //
+ // This member is required.
+ HoursOfOperationOverrideId *string
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ noSmithyDocumentSerde
+}
+
+type DeleteHoursOfOperationOverrideOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationDeleteHoursOfOperationOverrideMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpDeleteHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpDeleteHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "DeleteHoursOfOperationOverride"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpDeleteHoursOfOperationOverrideValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opDeleteHoursOfOperationOverride(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opDeleteHoursOfOperationOverride(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "DeleteHoursOfOperationOverride",
+ }
+}
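Editor's note on the pattern above: every generated `addOperation*Middlewares` function registers one middleware per step and returns on the first failure, so a partially built stack never serves a request. A minimal standalone sketch of that shape (hypothetical `stack` type, not the smithy-go middleware API):

```go
package main

import "fmt"

// stack is a hypothetical stand-in for the smithy-go middleware stack;
// it only records registration order.
type stack struct{ names []string }

// add registers one named middleware, failing fast on bad input.
func (s *stack) add(name string) error {
	if name == "" {
		return fmt.Errorf("middleware name must not be empty")
	}
	s.names = append(s.names, name)
	return nil
}

// buildStack mirrors the generated functions: every registration is
// checked, and the first error aborts the whole build.
func buildStack(s *stack) error {
	if err := s.add("SetLogger"); err != nil {
		return err
	}
	if err := s.add("Retry"); err != nil {
		return err
	}
	if err := s.add("Validation"); err != nil {
		return err
	}
	return nil
}

func main() {
	s := &stack{}
	if err := buildStack(s); err != nil {
		panic(err)
	}
	fmt.Println(s.names) // [SetLogger Retry Validation]
}
```

The fail-fast chain matters because middleware order is significant: a stack missing, say, the validation step would silently accept malformed input.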
diff --git a/service/connect/api_op_DescribeHoursOfOperationOverride.go b/service/connect/api_op_DescribeHoursOfOperationOverride.go
new file mode 100644
index 00000000000..910758694a0
--- /dev/null
+++ b/service/connect/api_op_DescribeHoursOfOperationOverride.go
@@ -0,0 +1,167 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/connect/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Describes the hours of operation override.
+func (c *Client) DescribeHoursOfOperationOverride(ctx context.Context, params *DescribeHoursOfOperationOverrideInput, optFns ...func(*Options)) (*DescribeHoursOfOperationOverrideOutput, error) {
+ if params == nil {
+ params = &DescribeHoursOfOperationOverrideInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "DescribeHoursOfOperationOverride", params, optFns, c.addOperationDescribeHoursOfOperationOverrideMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*DescribeHoursOfOperationOverrideOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type DescribeHoursOfOperationOverrideInput struct {
+
+ // The identifier for the hours of operation.
+ //
+ // This member is required.
+ HoursOfOperationId *string
+
+ // The identifier for the hours of operation override.
+ //
+ // This member is required.
+ HoursOfOperationOverrideId *string
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ noSmithyDocumentSerde
+}
+
+type DescribeHoursOfOperationOverrideOutput struct {
+
+ // Information about the hours of operation override.
+ HoursOfOperationOverride *types.HoursOfOperationOverride
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationDescribeHoursOfOperationOverrideMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpDescribeHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpDescribeHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "DescribeHoursOfOperationOverride"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpDescribeHoursOfOperationOverrideValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opDescribeHoursOfOperationOverride(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opDescribeHoursOfOperationOverride(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "DescribeHoursOfOperationOverride",
+ }
+}
diff --git a/service/connect/api_op_GetEffectiveHoursOfOperations.go b/service/connect/api_op_GetEffectiveHoursOfOperations.go
new file mode 100644
index 00000000000..a8778c592ef
--- /dev/null
+++ b/service/connect/api_op_GetEffectiveHoursOfOperations.go
@@ -0,0 +1,175 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/connect/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+ // Gets the hours of operation with the effective override applied.
+func (c *Client) GetEffectiveHoursOfOperations(ctx context.Context, params *GetEffectiveHoursOfOperationsInput, optFns ...func(*Options)) (*GetEffectiveHoursOfOperationsOutput, error) {
+ if params == nil {
+ params = &GetEffectiveHoursOfOperationsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetEffectiveHoursOfOperations", params, optFns, c.addOperationGetEffectiveHoursOfOperationsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetEffectiveHoursOfOperationsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetEffectiveHoursOfOperationsInput struct {
+
+ // The date from which the hours of operation are listed.
+ //
+ // This member is required.
+ FromDate *string
+
+ // The identifier for the hours of operation.
+ //
+ // This member is required.
+ HoursOfOperationId *string
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ // The date until which the hours of operation are listed.
+ //
+ // This member is required.
+ ToDate *string
+
+ noSmithyDocumentSerde
+}
+
+type GetEffectiveHoursOfOperationsOutput struct {
+
+ // Information about the effective hours of operations.
+ EffectiveHoursOfOperationList []types.EffectiveHoursOfOperations
+
+ // The time zone for the hours of operation.
+ TimeZone *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetEffectiveHoursOfOperationsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpGetEffectiveHoursOfOperations{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetEffectiveHoursOfOperations{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetEffectiveHoursOfOperations"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetEffectiveHoursOfOperationsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetEffectiveHoursOfOperations(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetEffectiveHoursOfOperations(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetEffectiveHoursOfOperations",
+ }
+}
diff --git a/service/connect/api_op_ListBots.go b/service/connect/api_op_ListBots.go
index d6f9a223c28..0e3318541fc 100644
--- a/service/connect/api_op_ListBots.go
+++ b/service/connect/api_op_ListBots.go
@@ -14,7 +14,7 @@ import (
// This API is in preview release for Amazon Connect and is subject to change.
//
// For the specified version of Amazon Lex, returns a paginated list of all the
-// Amazon Lex bots currently associated with the instance. Use this API to returns
+// Amazon Lex bots currently associated with the instance. Use this API to return
// both Amazon Lex V1 and V2 bots.
func (c *Client) ListBots(ctx context.Context, params *ListBotsInput, optFns ...func(*Options)) (*ListBotsOutput, error) {
if params == nil {
diff --git a/service/connect/api_op_ListHoursOfOperationOverrides.go b/service/connect/api_op_ListHoursOfOperationOverrides.go
new file mode 100644
index 00000000000..661114fff4b
--- /dev/null
+++ b/service/connect/api_op_ListHoursOfOperationOverrides.go
@@ -0,0 +1,278 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/connect/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+ // Lists the hours of operation overrides.
+func (c *Client) ListHoursOfOperationOverrides(ctx context.Context, params *ListHoursOfOperationOverridesInput, optFns ...func(*Options)) (*ListHoursOfOperationOverridesOutput, error) {
+ if params == nil {
+ params = &ListHoursOfOperationOverridesInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "ListHoursOfOperationOverrides", params, optFns, c.addOperationListHoursOfOperationOverridesMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*ListHoursOfOperationOverridesOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type ListHoursOfOperationOverridesInput struct {
+
+ // The identifier for the hours of operation.
+ //
+ // This member is required.
+ HoursOfOperationId *string
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ // The maximum number of results to return per page. The default MaxResult size is
+ // 100. Valid Range: Minimum value of 1. Maximum value of 1000.
+ MaxResults *int32
+
+ // The token for the next set of results. Use the value returned in the previous
+ // response in the next request to retrieve the next set of results.
+ NextToken *string
+
+ noSmithyDocumentSerde
+}
+
+type ListHoursOfOperationOverridesOutput struct {
+
+ // Information about the hours of operation overrides.
+ HoursOfOperationOverrideList []types.HoursOfOperationOverride
+
+ // The AWS Region where this resource was last modified.
+ LastModifiedRegion *string
+
+ // The timestamp when this resource was last modified.
+ LastModifiedTime *time.Time
+
+ // The token for the next set of results. Use the value returned in the previous
+ // response in the next request to retrieve the next set of results.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListHoursOfOperationOverridesMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpListHoursOfOperationOverrides{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListHoursOfOperationOverrides{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "ListHoursOfOperationOverrides"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpListHoursOfOperationOverridesValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListHoursOfOperationOverrides(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// ListHoursOfOperationOverridesPaginatorOptions is the paginator options for
+// ListHoursOfOperationOverrides
+type ListHoursOfOperationOverridesPaginatorOptions struct {
+ // The maximum number of results to return per page. The default MaxResult size is
+ // 100. Valid Range: Minimum value of 1. Maximum value of 1000.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// ListHoursOfOperationOverridesPaginator is a paginator for
+// ListHoursOfOperationOverrides
+type ListHoursOfOperationOverridesPaginator struct {
+ options ListHoursOfOperationOverridesPaginatorOptions
+ client ListHoursOfOperationOverridesAPIClient
+ params *ListHoursOfOperationOverridesInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewListHoursOfOperationOverridesPaginator returns a new
+// ListHoursOfOperationOverridesPaginator
+func NewListHoursOfOperationOverridesPaginator(client ListHoursOfOperationOverridesAPIClient, params *ListHoursOfOperationOverridesInput, optFns ...func(*ListHoursOfOperationOverridesPaginatorOptions)) *ListHoursOfOperationOverridesPaginator {
+ if params == nil {
+ params = &ListHoursOfOperationOverridesInput{}
+ }
+
+ options := ListHoursOfOperationOverridesPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &ListHoursOfOperationOverridesPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListHoursOfOperationOverridesPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListHoursOfOperationOverrides page.
+func (p *ListHoursOfOperationOverridesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListHoursOfOperationOverridesOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+ result, err := p.client.ListHoursOfOperationOverrides(ctx, ¶ms, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// ListHoursOfOperationOverridesAPIClient is a client that implements the
+// ListHoursOfOperationOverrides operation.
+type ListHoursOfOperationOverridesAPIClient interface {
+ ListHoursOfOperationOverrides(context.Context, *ListHoursOfOperationOverridesInput, ...func(*Options)) (*ListHoursOfOperationOverridesOutput, error)
+}
+
+var _ ListHoursOfOperationOverridesAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListHoursOfOperationOverrides(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "ListHoursOfOperationOverrides",
+ }
+}
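Editor's note: the generated paginator above drives token-based pagination. `HasMorePages` reports whether a first page or a non-empty token remains, and `NextPage` re-issues the request with the previous token, optionally halting when the service echoes back the same token (`StopOnDuplicateToken`). A self-contained sketch of that token logic, with a hypothetical `fetch` function standing in for the SDK client:

```go
package main

import "fmt"

// paginate sketches the generated paginator loop: keep requesting with the
// last token until the service returns an empty one, and (optionally) stop
// early if the token repeats, to avoid an infinite loop.
func paginate(fetch func(token string) ([]string, string), stopOnDup bool) []string {
	var all []string
	token := ""
	first := true
	for first || token != "" {
		first = false
		items, next := fetch(token)
		all = append(all, items...)
		if stopOnDup && next != "" && next == token {
			break // same token as the last request: stop paginating
		}
		token = next
	}
	return all
}

func main() {
	// Two fake pages keyed by token; the empty token is the first request.
	pages := map[string]struct {
		items []string
		next  string
	}{
		"":   {[]string{"a", "b"}, "t1"},
		"t1": {[]string{"c"}, ""},
	}
	fetch := func(token string) ([]string, string) {
		p := pages[token]
		return p.items, p.next
	}
	got := paginate(fetch, true)
	fmt.Println(got) // [a b c]
}
```

With the real generated paginator the same loop is spelled `for p.HasMorePages() { out, err := p.NextPage(ctx); ... }`.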
diff --git a/service/connect/api_op_SearchHoursOfOperationOverrides.go b/service/connect/api_op_SearchHoursOfOperationOverrides.go
new file mode 100644
index 00000000000..d1354d361c2
--- /dev/null
+++ b/service/connect/api_op_SearchHoursOfOperationOverrides.go
@@ -0,0 +1,277 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/connect/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Searches the hours of operation overrides.
+func (c *Client) SearchHoursOfOperationOverrides(ctx context.Context, params *SearchHoursOfOperationOverridesInput, optFns ...func(*Options)) (*SearchHoursOfOperationOverridesOutput, error) {
+ if params == nil {
+ params = &SearchHoursOfOperationOverridesInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "SearchHoursOfOperationOverrides", params, optFns, c.addOperationSearchHoursOfOperationOverridesMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*SearchHoursOfOperationOverridesOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type SearchHoursOfOperationOverridesInput struct {
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ // The maximum number of results to return per page. Valid Range: Minimum value of
+ // 1. Maximum value of 100.
+ MaxResults *int32
+
+ // The token for the next set of results. Use the value returned in the previous
+ // response in the next request to retrieve the next set of results. Length
+ // Constraints: Minimum length of 1. Maximum length of 2500.
+ NextToken *string
+
+ // The search criteria to be used to return hours of operation overrides.
+ SearchCriteria *types.HoursOfOperationOverrideSearchCriteria
+
+ // Filters to be applied to search results.
+ SearchFilter *types.HoursOfOperationSearchFilter
+
+ noSmithyDocumentSerde
+}
+
+type SearchHoursOfOperationOverridesOutput struct {
+
+ // The total number of hours of operation overrides that matched your search
+ // query.
+ ApproximateTotalCount *int64
+
+ // Information about the hours of operation overrides.
+ HoursOfOperationOverrides []types.HoursOfOperationOverride
+
+ // The token for the next set of results. Use the value returned in the previous
+ // response in the next request to retrieve the next set of results. Length
+ // Constraints: Minimum length of 1. Maximum length of 2500.
+ NextToken *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationSearchHoursOfOperationOverridesMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpSearchHoursOfOperationOverrides{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpSearchHoursOfOperationOverrides{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "SearchHoursOfOperationOverrides"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpSearchHoursOfOperationOverridesValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opSearchHoursOfOperationOverrides(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+// SearchHoursOfOperationOverridesPaginatorOptions is the paginator options for
+// SearchHoursOfOperationOverrides
+type SearchHoursOfOperationOverridesPaginatorOptions struct {
+ // The maximum number of results to return per page. Valid Range: Minimum value of
+ // 1. Maximum value of 100.
+ Limit int32
+
+ // Set to true if pagination should stop if the service returns a pagination token
+ // that matches the most recent token provided to the service.
+ StopOnDuplicateToken bool
+}
+
+// SearchHoursOfOperationOverridesPaginator is a paginator for
+// SearchHoursOfOperationOverrides
+type SearchHoursOfOperationOverridesPaginator struct {
+ options SearchHoursOfOperationOverridesPaginatorOptions
+ client SearchHoursOfOperationOverridesAPIClient
+ params *SearchHoursOfOperationOverridesInput
+ nextToken *string
+ firstPage bool
+}
+
+// NewSearchHoursOfOperationOverridesPaginator returns a new
+// SearchHoursOfOperationOverridesPaginator
+func NewSearchHoursOfOperationOverridesPaginator(client SearchHoursOfOperationOverridesAPIClient, params *SearchHoursOfOperationOverridesInput, optFns ...func(*SearchHoursOfOperationOverridesPaginatorOptions)) *SearchHoursOfOperationOverridesPaginator {
+ if params == nil {
+ params = &SearchHoursOfOperationOverridesInput{}
+ }
+
+ options := SearchHoursOfOperationOverridesPaginatorOptions{}
+ if params.MaxResults != nil {
+ options.Limit = *params.MaxResults
+ }
+
+ for _, fn := range optFns {
+ fn(&options)
+ }
+
+ return &SearchHoursOfOperationOverridesPaginator{
+ options: options,
+ client: client,
+ params: params,
+ firstPage: true,
+ nextToken: params.NextToken,
+ }
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *SearchHoursOfOperationOverridesPaginator) HasMorePages() bool {
+ return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next SearchHoursOfOperationOverrides page.
+func (p *SearchHoursOfOperationOverridesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchHoursOfOperationOverridesOutput, error) {
+ if !p.HasMorePages() {
+ return nil, fmt.Errorf("no more pages available")
+ }
+
+ params := *p.params
+ params.NextToken = p.nextToken
+
+ var limit *int32
+ if p.options.Limit > 0 {
+ limit = &p.options.Limit
+ }
+ params.MaxResults = limit
+
+ optFns = append([]func(*Options){
+ addIsPaginatorUserAgent,
+ }, optFns...)
+ result, err := p.client.SearchHoursOfOperationOverrides(ctx, ¶ms, optFns...)
+ if err != nil {
+ return nil, err
+ }
+ p.firstPage = false
+
+ prevToken := p.nextToken
+ p.nextToken = result.NextToken
+
+ if p.options.StopOnDuplicateToken &&
+ prevToken != nil &&
+ p.nextToken != nil &&
+ *prevToken == *p.nextToken {
+ p.nextToken = nil
+ }
+
+ return result, nil
+}
+
+// SearchHoursOfOperationOverridesAPIClient is a client that implements the
+// SearchHoursOfOperationOverrides operation.
+type SearchHoursOfOperationOverridesAPIClient interface {
+ SearchHoursOfOperationOverrides(context.Context, *SearchHoursOfOperationOverridesInput, ...func(*Options)) (*SearchHoursOfOperationOverridesOutput, error)
+}
+
+var _ SearchHoursOfOperationOverridesAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opSearchHoursOfOperationOverrides(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "SearchHoursOfOperationOverrides",
+ }
+}
diff --git a/service/connect/api_op_StartChatContact.go b/service/connect/api_op_StartChatContact.go
index 9aa9a339b4c..9c59b2850cc 100644
--- a/service/connect/api_op_StartChatContact.go
+++ b/service/connect/api_op_StartChatContact.go
@@ -102,6 +102,10 @@ type StartChatContactInput struct {
// [Making retries safe with idempotent APIs]: https://aws.amazon.com/builders-library/making-retries-safe-with-idempotent-APIs/
ClientToken *string
+ // The customer's identification number. For example, the CustomerId may be a
+ // customer number from your CRM.
+ CustomerId *string
+
// The initial message to be sent to the newly created chat. If you have a Lex bot
// in your flow, the initial message is not delivered to the Lex bot.
InitialMessage *types.ChatMessage
diff --git a/service/connect/api_op_UpdateHoursOfOperationOverride.go b/service/connect/api_op_UpdateHoursOfOperationOverride.go
new file mode 100644
index 00000000000..f5edd1e5b43
--- /dev/null
+++ b/service/connect/api_op_UpdateHoursOfOperationOverride.go
@@ -0,0 +1,179 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/connect/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Update the hours of operation override.
+func (c *Client) UpdateHoursOfOperationOverride(ctx context.Context, params *UpdateHoursOfOperationOverrideInput, optFns ...func(*Options)) (*UpdateHoursOfOperationOverrideOutput, error) {
+ if params == nil {
+ params = &UpdateHoursOfOperationOverrideInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateHoursOfOperationOverride", params, optFns, c.addOperationUpdateHoursOfOperationOverrideMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateHoursOfOperationOverrideOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateHoursOfOperationOverrideInput struct {
+
+ // The identifier for the hours of operation.
+ //
+ // This member is required.
+ HoursOfOperationId *string
+
+ // The identifier for the hours of operation override.
+ //
+ // This member is required.
+ HoursOfOperationOverrideId *string
+
+ // The identifier of the Amazon Connect instance.
+ //
+ // This member is required.
+ InstanceId *string
+
+ // Configuration information for the hours of operation override: day, start time,
+ // and end time.
+ Config []types.HoursOfOperationOverrideConfig
+
+ // The description of the hours of operation override.
+ Description *string
+
+ // The date from which the hours of operation override is effective.
+ EffectiveFrom *string
+
+ // The date until which the hours of operation override is effective.
+ EffectiveTill *string
+
+ // The name of the hours of operation override.
+ Name *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateHoursOfOperationOverrideOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateHoursOfOperationOverrideMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpUpdateHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpUpdateHoursOfOperationOverride{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateHoursOfOperationOverride"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateHoursOfOperationOverrideValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateHoursOfOperationOverride(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateHoursOfOperationOverride(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateHoursOfOperationOverride",
+ }
+}
diff --git a/service/connect/api_op_UpdateParticipantAuthentication.go b/service/connect/api_op_UpdateParticipantAuthentication.go
new file mode 100644
index 00000000000..7e2b004c6a8
--- /dev/null
+++ b/service/connect/api_op_UpdateParticipantAuthentication.go
@@ -0,0 +1,184 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connect
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Instructs Amazon Connect to resume the authentication process. The subsequent
+// actions depend on the request body contents:
+//
+// - If a code is provided: Connect retrieves the identity information from
+// Amazon Cognito and imports it into Connect Customer Profiles.
+//
+// - If an error is provided: The error branch of the Authenticate Customer
+// block is executed.
+//
+// The API returns a success response to acknowledge the request. However, the
+// interaction and exchange of identity information occur asynchronously after the
+// response is returned.
+func (c *Client) UpdateParticipantAuthentication(ctx context.Context, params *UpdateParticipantAuthenticationInput, optFns ...func(*Options)) (*UpdateParticipantAuthenticationOutput, error) {
+ if params == nil {
+ params = &UpdateParticipantAuthenticationInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateParticipantAuthentication", params, optFns, c.addOperationUpdateParticipantAuthenticationMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateParticipantAuthenticationOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateParticipantAuthenticationInput struct {
+
+ // The identifier of the Amazon Connect instance. You can [find the instance ID] in the Amazon Resource
+ // Name (ARN) of the instance.
+ //
+ // [find the instance ID]: https://docs.aws.amazon.com/connect/latest/adminguide/find-instance-arn.html
+ //
+ // This member is required.
+ InstanceId *string
+
+ // The state query parameter that was provided by Cognito in the redirectUri . This
+ // will also match the state parameter provided in the AuthenticationUrl from the [GetAuthenticationUrl]
+ // response.
+ //
+ // [GetAuthenticationUrl]: https://docs.aws.amazon.com/connect/latest/APIReference/API_GetAuthenticationUrl.html
+ //
+ // This member is required.
+ State *string
+
+ // The code query parameter provided by Cognito in the redirectUri .
+ Code *string
+
+ // The error query parameter provided by Cognito in the redirectUri .
+ Error *string
+
+ // The error_description parameter provided by Cognito in the redirectUri .
+ ErrorDescription *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateParticipantAuthenticationOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateParticipantAuthenticationMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpUpdateParticipantAuthentication{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpUpdateParticipantAuthentication{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateParticipantAuthentication"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateParticipantAuthenticationValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateParticipantAuthentication(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateParticipantAuthentication(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateParticipantAuthentication",
+ }
+}
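The generated error deserializers in the `deserializers.go` diff below all resolve the error code with the same precedence: the `X-Amzn-ErrorType` header wins, then the code found in the JSON body, then a generic `UnknownError` fallback. A simplified stdlib-only sketch of that precedence (`resolveErrorCode` and `sanitize` are illustrative stand-ins for the generated logic and `restjson.SanitizeErrorCode`, not SDK APIs):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveErrorCode mirrors, in simplified form, the precedence the generated
// deserializers apply: header code first, then JSON body code, then fallback.
func resolveErrorCode(headerCode, jsonCode string) string {
	errorCode := "UnknownError"
	if len(headerCode) != 0 {
		errorCode = sanitize(headerCode)
	}
	if len(headerCode) == 0 && len(jsonCode) != 0 {
		errorCode = sanitize(jsonCode)
	}
	return errorCode
}

// sanitize approximates the cleanup step: drop any Smithy namespace prefix
// ("com.amazonaws.connect#Code") and any ":..." suffix.
func sanitize(code string) string {
	if idx := strings.Index(code, "#"); idx >= 0 {
		code = code[idx+1:]
	}
	if idx := strings.Index(code, ":"); idx >= 0 {
		code = code[:idx]
	}
	return code
}

func main() {
	fmt.Println(resolveErrorCode("com.amazonaws.connect#ResourceNotFoundException", ""))
	fmt.Println(resolveErrorCode("", "ThrottlingException"))
	fmt.Println(resolveErrorCode("", ""))
}
```

The resolved code is then matched case-insensitively (`strings.EqualFold`) against the operation's modeled exceptions, with `smithy.GenericAPIError` as the default branch, exactly as in the switch statements that follow.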
diff --git a/service/connect/deserializers.go b/service/connect/deserializers.go
index 87579ca350b..119b9ecace6 100644
--- a/service/connect/deserializers.go
+++ b/service/connect/deserializers.go
@@ -4569,6 +4569,180 @@ func awsRestjson1_deserializeOpDocumentCreateHoursOfOperationOutput(v **CreateHo
return nil
}
+type awsRestjson1_deserializeOpCreateHoursOfOperationOverride struct {
+}
+
+func (*awsRestjson1_deserializeOpCreateHoursOfOperationOverride) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpCreateHoursOfOperationOverride) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorCreateHoursOfOperationOverride(response, &metadata)
+ }
+ output := &CreateHoursOfOperationOverrideOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentCreateHoursOfOperationOverrideOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorCreateHoursOfOperationOverride(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("DuplicateResourceException", errorCode):
+ return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
+
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("LimitExceededException", errorCode):
+ return awsRestjson1_deserializeErrorLimitExceededException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentCreateHoursOfOperationOverrideOutput(v **CreateHoursOfOperationOverrideOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *CreateHoursOfOperationOverrideOutput
+ if *v == nil {
+ sv = &CreateHoursOfOperationOverrideOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "HoursOfOperationOverrideId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected HoursOfOperationOverrideId to be of type string, got %T instead", value)
+ }
+ sv.HoursOfOperationOverrideId = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
type awsRestjson1_deserializeOpCreateInstance struct {
}
@@ -9015,6 +9189,112 @@ func awsRestjson1_deserializeOpErrorDeleteHoursOfOperation(response *smithyhttp.
}
}
+type awsRestjson1_deserializeOpDeleteHoursOfOperationOverride struct {
+}
+
+func (*awsRestjson1_deserializeOpDeleteHoursOfOperationOverride) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpDeleteHoursOfOperationOverride) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorDeleteHoursOfOperationOverride(response, &metadata)
+ }
+ output := &DeleteHoursOfOperationOverrideOutput{}
+ out.Result = output
+
+ if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to discard response body, %w", err),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorDeleteHoursOfOperationOverride(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
type awsRestjson1_deserializeOpDeleteInstance struct {
}
@@ -12541,6 +12821,170 @@ func awsRestjson1_deserializeOpDocumentDescribeHoursOfOperationOutput(v **Descri
return nil
}
+type awsRestjson1_deserializeOpDescribeHoursOfOperationOverride struct {
+}
+
+func (*awsRestjson1_deserializeOpDescribeHoursOfOperationOverride) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpDescribeHoursOfOperationOverride) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorDescribeHoursOfOperationOverride(response, &metadata)
+ }
+ output := &DescribeHoursOfOperationOverrideOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentDescribeHoursOfOperationOverrideOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorDescribeHoursOfOperationOverride(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentDescribeHoursOfOperationOverrideOutput(v **DescribeHoursOfOperationOverrideOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *DescribeHoursOfOperationOverrideOutput
+ if *v == nil {
+ sv = &DescribeHoursOfOperationOverrideOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "HoursOfOperationOverride":
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationOverride(&sv.HoursOfOperationOverride, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
type awsRestjson1_deserializeOpDescribeInstance struct {
}
@@ -17599,14 +18043,14 @@ func awsRestjson1_deserializeOpDocumentGetCurrentUserDataOutput(v **GetCurrentUs
return nil
}
-type awsRestjson1_deserializeOpGetFederationToken struct {
+type awsRestjson1_deserializeOpGetEffectiveHoursOfOperations struct {
}
-func (*awsRestjson1_deserializeOpGetFederationToken) ID() string {
+func (*awsRestjson1_deserializeOpGetEffectiveHoursOfOperations) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpGetFederationToken) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpGetEffectiveHoursOfOperations) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -17624,9 +18068,9 @@ func (m *awsRestjson1_deserializeOpGetFederationToken) HandleDeserialize(ctx con
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorGetFederationToken(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorGetEffectiveHoursOfOperations(response, &metadata)
}
- output := &GetFederationTokenOutput{}
+ output := &GetEffectiveHoursOfOperationsOutput{}
out.Result = output
var buff [1024]byte
@@ -17647,7 +18091,7 @@ func (m *awsRestjson1_deserializeOpGetFederationToken) HandleDeserialize(ctx con
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentGetFederationTokenOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentGetEffectiveHoursOfOperationsOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -17661,7 +18105,7 @@ func (m *awsRestjson1_deserializeOpGetFederationToken) HandleDeserialize(ctx con
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorGetFederationToken(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorGetEffectiveHoursOfOperations(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -17702,9 +18146,6 @@ func awsRestjson1_deserializeOpErrorGetFederationToken(response *smithyhttp.Resp
}
switch {
- case strings.EqualFold("DuplicateResourceException", errorCode):
- return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -17717,8 +18158,8 @@ func awsRestjson1_deserializeOpErrorGetFederationToken(response *smithyhttp.Resp
case strings.EqualFold("ResourceNotFoundException", errorCode):
return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
- case strings.EqualFold("UserNotFoundException", errorCode):
- return awsRestjson1_deserializeErrorUserNotFoundException(response, errorBody)
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
default:
genericError := &smithy.GenericAPIError{
@@ -17730,7 +18171,7 @@ func awsRestjson1_deserializeOpErrorGetFederationToken(response *smithyhttp.Resp
}
}
-func awsRestjson1_deserializeOpDocumentGetFederationTokenOutput(v **GetFederationTokenOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentGetEffectiveHoursOfOperationsOutput(v **GetEffectiveHoursOfOperationsOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -17743,45 +18184,27 @@ func awsRestjson1_deserializeOpDocumentGetFederationTokenOutput(v **GetFederatio
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *GetFederationTokenOutput
+ var sv *GetEffectiveHoursOfOperationsOutput
if *v == nil {
- sv = &GetFederationTokenOutput{}
+ sv = &GetEffectiveHoursOfOperationsOutput{}
} else {
sv = *v
}
for key, value := range shape {
switch key {
- case "Credentials":
- if err := awsRestjson1_deserializeDocumentCredentials(&sv.Credentials, value); err != nil {
+ case "EffectiveHoursOfOperationList":
+ if err := awsRestjson1_deserializeDocumentEffectiveHoursOfOperationList(&sv.EffectiveHoursOfOperationList, value); err != nil {
return err
}
- case "SignInUrl":
- if value != nil {
- jtv, ok := value.(string)
- if !ok {
- return fmt.Errorf("expected Url to be of type string, got %T instead", value)
- }
- sv.SignInUrl = ptr.String(jtv)
- }
-
- case "UserArn":
- if value != nil {
- jtv, ok := value.(string)
- if !ok {
- return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
- }
- sv.UserArn = ptr.String(jtv)
- }
-
- case "UserId":
+ case "TimeZone":
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected AgentResourceId to be of type string, got %T instead", value)
+ return fmt.Errorf("expected TimeZone to be of type string, got %T instead", value)
}
- sv.UserId = ptr.String(jtv)
+ sv.TimeZone = ptr.String(jtv)
}
default:
@@ -17793,14 +18216,14 @@ func awsRestjson1_deserializeOpDocumentGetFederationTokenOutput(v **GetFederatio
return nil
}
-type awsRestjson1_deserializeOpGetFlowAssociation struct {
+type awsRestjson1_deserializeOpGetFederationToken struct {
}
-func (*awsRestjson1_deserializeOpGetFlowAssociation) ID() string {
+func (*awsRestjson1_deserializeOpGetFederationToken) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpGetFlowAssociation) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpGetFederationToken) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -17818,9 +18241,9 @@ func (m *awsRestjson1_deserializeOpGetFlowAssociation) HandleDeserialize(ctx con
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorGetFlowAssociation(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorGetFederationToken(response, &metadata)
}
- output := &GetFlowAssociationOutput{}
+ output := &GetFederationTokenOutput{}
out.Result = output
var buff [1024]byte
@@ -17841,7 +18264,7 @@ func (m *awsRestjson1_deserializeOpGetFlowAssociation) HandleDeserialize(ctx con
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentGetFlowAssociationOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentGetFederationTokenOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -17855,7 +18278,7 @@ func (m *awsRestjson1_deserializeOpGetFlowAssociation) HandleDeserialize(ctx con
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorGetFlowAssociation(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorGetFederationToken(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -17896,8 +18319,8 @@ func awsRestjson1_deserializeOpErrorGetFlowAssociation(response *smithyhttp.Resp
}
switch {
- case strings.EqualFold("AccessDeniedException", errorCode):
- return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+ case strings.EqualFold("DuplicateResourceException", errorCode):
+ return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -17911,8 +18334,8 @@ func awsRestjson1_deserializeOpErrorGetFlowAssociation(response *smithyhttp.Resp
case strings.EqualFold("ResourceNotFoundException", errorCode):
return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
- case strings.EqualFold("ThrottlingException", errorCode):
- return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+ case strings.EqualFold("UserNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorUserNotFoundException(response, errorBody)
default:
genericError := &smithy.GenericAPIError{
@@ -17924,7 +18347,7 @@ func awsRestjson1_deserializeOpErrorGetFlowAssociation(response *smithyhttp.Resp
}
}
-func awsRestjson1_deserializeOpDocumentGetFlowAssociationOutput(v **GetFlowAssociationOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentGetFederationTokenOutput(v **GetFederationTokenOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -17937,40 +18360,45 @@ func awsRestjson1_deserializeOpDocumentGetFlowAssociationOutput(v **GetFlowAssoc
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *GetFlowAssociationOutput
+ var sv *GetFederationTokenOutput
if *v == nil {
- sv = &GetFlowAssociationOutput{}
+ sv = &GetFederationTokenOutput{}
} else {
sv = *v
}
for key, value := range shape {
switch key {
- case "FlowId":
+ case "Credentials":
+ if err := awsRestjson1_deserializeDocumentCredentials(&sv.Credentials, value); err != nil {
+ return err
+ }
+
+ case "SignInUrl":
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ return fmt.Errorf("expected Url to be of type string, got %T instead", value)
}
- sv.FlowId = ptr.String(jtv)
+ sv.SignInUrl = ptr.String(jtv)
}
- case "ResourceId":
+ case "UserArn":
if value != nil {
jtv, ok := value.(string)
if !ok {
return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
}
- sv.ResourceId = ptr.String(jtv)
+ sv.UserArn = ptr.String(jtv)
}
- case "ResourceType":
+ case "UserId":
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected FlowAssociationResourceType to be of type string, got %T instead", value)
+ return fmt.Errorf("expected AgentResourceId to be of type string, got %T instead", value)
}
- sv.ResourceType = types.FlowAssociationResourceType(jtv)
+ sv.UserId = ptr.String(jtv)
}
default:
@@ -17982,14 +18410,14 @@ func awsRestjson1_deserializeOpDocumentGetFlowAssociationOutput(v **GetFlowAssoc
return nil
}
-type awsRestjson1_deserializeOpGetMetricData struct {
+type awsRestjson1_deserializeOpGetFlowAssociation struct {
}
-func (*awsRestjson1_deserializeOpGetMetricData) ID() string {
+func (*awsRestjson1_deserializeOpGetFlowAssociation) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpGetMetricData) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpGetFlowAssociation) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -18007,9 +18435,9 @@ func (m *awsRestjson1_deserializeOpGetMetricData) HandleDeserialize(ctx context.
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorGetMetricData(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorGetFlowAssociation(response, &metadata)
}
- output := &GetMetricDataOutput{}
+ output := &GetFlowAssociationOutput{}
out.Result = output
var buff [1024]byte
@@ -18030,7 +18458,7 @@ func (m *awsRestjson1_deserializeOpGetMetricData) HandleDeserialize(ctx context.
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentGetMetricDataOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentGetFlowAssociationOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -18044,7 +18472,196 @@ func (m *awsRestjson1_deserializeOpGetMetricData) HandleDeserialize(ctx context.
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorGetMetricData(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorGetFlowAssociation(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentGetFlowAssociationOutput(v **GetFlowAssociationOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetFlowAssociationOutput
+ if *v == nil {
+ sv = &GetFlowAssociationOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "FlowId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.FlowId = ptr.String(jtv)
+ }
+
+ case "ResourceId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.ResourceId = ptr.String(jtv)
+ }
+
+ case "ResourceType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected FlowAssociationResourceType to be of type string, got %T instead", value)
+ }
+ sv.ResourceType = types.FlowAssociationResourceType(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpGetMetricData struct {
+}
+
+func (*awsRestjson1_deserializeOpGetMetricData) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetMetricData) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorGetMetricData(response, &metadata)
+ }
+ output := &GetMetricDataOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentGetMetricDataOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetMetricData(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -21765,14 +22382,14 @@ func awsRestjson1_deserializeOpDocumentListFlowAssociationsOutput(v **ListFlowAs
return nil
}
-type awsRestjson1_deserializeOpListHoursOfOperations struct {
+type awsRestjson1_deserializeOpListHoursOfOperationOverrides struct {
}
-func (*awsRestjson1_deserializeOpListHoursOfOperations) ID() string {
+func (*awsRestjson1_deserializeOpListHoursOfOperationOverrides) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpListHoursOfOperations) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpListHoursOfOperationOverrides) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -21790,9 +22407,9 @@ func (m *awsRestjson1_deserializeOpListHoursOfOperations) HandleDeserialize(ctx
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorListHoursOfOperations(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorListHoursOfOperationOverrides(response, &metadata)
}
- output := &ListHoursOfOperationsOutput{}
+ output := &ListHoursOfOperationOverridesOutput{}
out.Result = output
var buff [1024]byte
@@ -21813,7 +22430,7 @@ func (m *awsRestjson1_deserializeOpListHoursOfOperations) HandleDeserialize(ctx
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentListHoursOfOperationsOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentListHoursOfOperationOverridesOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -21827,7 +22444,7 @@ func (m *awsRestjson1_deserializeOpListHoursOfOperations) HandleDeserialize(ctx
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorListHoursOfOperations(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorListHoursOfOperationOverrides(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -21893,7 +22510,7 @@ func awsRestjson1_deserializeOpErrorListHoursOfOperations(response *smithyhttp.R
}
}
-func awsRestjson1_deserializeOpDocumentListHoursOfOperationsOutput(v **ListHoursOfOperationsOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentListHoursOfOperationOverridesOutput(v **ListHoursOfOperationOverridesOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -21906,20 +22523,45 @@ func awsRestjson1_deserializeOpDocumentListHoursOfOperationsOutput(v **ListHours
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *ListHoursOfOperationsOutput
+ var sv *ListHoursOfOperationOverridesOutput
if *v == nil {
- sv = &ListHoursOfOperationsOutput{}
+ sv = &ListHoursOfOperationOverridesOutput{}
} else {
sv = *v
}
for key, value := range shape {
switch key {
- case "HoursOfOperationSummaryList":
- if err := awsRestjson1_deserializeDocumentHoursOfOperationSummaryList(&sv.HoursOfOperationSummaryList, value); err != nil {
+ case "HoursOfOperationOverrideList":
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationOverrideList(&sv.HoursOfOperationOverrideList, value); err != nil {
return err
}
+ case "LastModifiedRegion":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected RegionName to be of type string, got %T instead", value)
+ }
+ sv.LastModifiedRegion = ptr.String(jtv)
+ }
+
+ case "LastModifiedTime":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.LastModifiedTime = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
case "NextToken":
if value != nil {
jtv, ok := value.(string)
@@ -21938,14 +22580,14 @@ func awsRestjson1_deserializeOpDocumentListHoursOfOperationsOutput(v **ListHours
return nil
}
-type awsRestjson1_deserializeOpListInstanceAttributes struct {
+type awsRestjson1_deserializeOpListHoursOfOperations struct {
}
-func (*awsRestjson1_deserializeOpListInstanceAttributes) ID() string {
+func (*awsRestjson1_deserializeOpListHoursOfOperations) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpListInstanceAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpListHoursOfOperations) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -21963,9 +22605,9 @@ func (m *awsRestjson1_deserializeOpListInstanceAttributes) HandleDeserialize(ctx
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorListInstanceAttributes(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorListHoursOfOperations(response, &metadata)
}
- output := &ListInstanceAttributesOutput{}
+ output := &ListHoursOfOperationsOutput{}
out.Result = output
var buff [1024]byte
@@ -21986,7 +22628,7 @@ func (m *awsRestjson1_deserializeOpListInstanceAttributes) HandleDeserialize(ctx
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentListInstanceAttributesOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentListHoursOfOperationsOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -22000,7 +22642,7 @@ func (m *awsRestjson1_deserializeOpListInstanceAttributes) HandleDeserialize(ctx
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorListInstanceAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorListHoursOfOperations(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -22066,7 +22708,7 @@ func awsRestjson1_deserializeOpErrorListInstanceAttributes(response *smithyhttp.
}
}
-func awsRestjson1_deserializeOpDocumentListInstanceAttributesOutput(v **ListInstanceAttributesOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentListHoursOfOperationsOutput(v **ListHoursOfOperationsOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -22079,17 +22721,17 @@ func awsRestjson1_deserializeOpDocumentListInstanceAttributesOutput(v **ListInst
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *ListInstanceAttributesOutput
+ var sv *ListHoursOfOperationsOutput
if *v == nil {
- sv = &ListInstanceAttributesOutput{}
+ sv = &ListHoursOfOperationsOutput{}
} else {
sv = *v
}
for key, value := range shape {
switch key {
- case "Attributes":
- if err := awsRestjson1_deserializeDocumentAttributesList(&sv.Attributes, value); err != nil {
+ case "HoursOfOperationSummaryList":
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationSummaryList(&sv.HoursOfOperationSummaryList, value); err != nil {
return err
}
@@ -22111,14 +22753,14 @@ func awsRestjson1_deserializeOpDocumentListInstanceAttributesOutput(v **ListInst
return nil
}
-type awsRestjson1_deserializeOpListInstances struct {
+type awsRestjson1_deserializeOpListInstanceAttributes struct {
}
-func (*awsRestjson1_deserializeOpListInstances) ID() string {
+func (*awsRestjson1_deserializeOpListInstanceAttributes) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpListInstances) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpListInstanceAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -22136,9 +22778,9 @@ func (m *awsRestjson1_deserializeOpListInstances) HandleDeserialize(ctx context.
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorListInstances(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorListInstanceAttributes(response, &metadata)
}
- output := &ListInstancesOutput{}
+ output := &ListInstanceAttributesOutput{}
out.Result = output
var buff [1024]byte
@@ -22159,7 +22801,7 @@ func (m *awsRestjson1_deserializeOpListInstances) HandleDeserialize(ctx context.
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentListInstancesOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentListInstanceAttributesOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -22173,7 +22815,180 @@ func (m *awsRestjson1_deserializeOpListInstances) HandleDeserialize(ctx context.
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorListInstances(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorListInstanceAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentListInstanceAttributesOutput(v **ListInstanceAttributesOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *ListInstanceAttributesOutput
+ if *v == nil {
+ sv = &ListInstanceAttributesOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Attributes":
+ if err := awsRestjson1_deserializeDocumentAttributesList(&sv.Attributes, value); err != nil {
+ return err
+ }
+
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected NextToken to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpListInstances struct {
+}
+
+func (*awsRestjson1_deserializeOpListInstances) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListInstances) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorListInstances(response, &metadata)
+ }
+ output := &ListInstancesOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentListInstancesOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListInstances(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -29422,14 +30237,14 @@ func awsRestjson1_deserializeOpDocumentSearchEmailAddressesOutput(v **SearchEmai
return nil
}
-type awsRestjson1_deserializeOpSearchHoursOfOperations struct {
+type awsRestjson1_deserializeOpSearchHoursOfOperationOverrides struct {
}
-func (*awsRestjson1_deserializeOpSearchHoursOfOperations) ID() string {
+func (*awsRestjson1_deserializeOpSearchHoursOfOperationOverrides) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpSearchHoursOfOperations) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpSearchHoursOfOperationOverrides) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -29447,9 +30262,9 @@ func (m *awsRestjson1_deserializeOpSearchHoursOfOperations) HandleDeserialize(ct
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorSearchHoursOfOperations(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorSearchHoursOfOperationOverrides(response, &metadata)
}
- output := &SearchHoursOfOperationsOutput{}
+ output := &SearchHoursOfOperationOverridesOutput{}
out.Result = output
var buff [1024]byte
@@ -29470,7 +30285,7 @@ func (m *awsRestjson1_deserializeOpSearchHoursOfOperations) HandleDeserialize(ct
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentSearchHoursOfOperationOverridesOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -29484,7 +30299,7 @@ func (m *awsRestjson1_deserializeOpSearchHoursOfOperations) HandleDeserialize(ct
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorSearchHoursOfOperations(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorSearchHoursOfOperationOverrides(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -29550,7 +30365,7 @@ func awsRestjson1_deserializeOpErrorSearchHoursOfOperations(response *smithyhttp
}
}
-func awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(v **SearchHoursOfOperationsOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentSearchHoursOfOperationOverridesOutput(v **SearchHoursOfOperationOverridesOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -29563,9 +30378,9 @@ func awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(v **SearchH
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *SearchHoursOfOperationsOutput
+ var sv *SearchHoursOfOperationOverridesOutput
if *v == nil {
- sv = &SearchHoursOfOperationsOutput{}
+ sv = &SearchHoursOfOperationOverridesOutput{}
} else {
sv = *v
}
@@ -29585,8 +30400,8 @@ func awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(v **SearchH
sv.ApproximateTotalCount = ptr.Int64(i64)
}
- case "HoursOfOperations":
- if err := awsRestjson1_deserializeDocumentHoursOfOperationList(&sv.HoursOfOperations, value); err != nil {
+ case "HoursOfOperationOverrides":
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationOverrideList(&sv.HoursOfOperationOverrides, value); err != nil {
return err
}
@@ -29608,14 +30423,14 @@ func awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(v **SearchH
return nil
}
-type awsRestjson1_deserializeOpSearchPredefinedAttributes struct {
+type awsRestjson1_deserializeOpSearchHoursOfOperations struct {
}
-func (*awsRestjson1_deserializeOpSearchPredefinedAttributes) ID() string {
+func (*awsRestjson1_deserializeOpSearchHoursOfOperations) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpSearchPredefinedAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpSearchHoursOfOperations) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -29633,9 +30448,9 @@ func (m *awsRestjson1_deserializeOpSearchPredefinedAttributes) HandleDeserialize
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorSearchPredefinedAttributes(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorSearchHoursOfOperations(response, &metadata)
}
- output := &SearchPredefinedAttributesOutput{}
+ output := &SearchHoursOfOperationsOutput{}
out.Result = output
var buff [1024]byte
@@ -29656,7 +30471,7 @@ func (m *awsRestjson1_deserializeOpSearchPredefinedAttributes) HandleDeserialize
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -29670,7 +30485,7 @@ func (m *awsRestjson1_deserializeOpSearchPredefinedAttributes) HandleDeserialize
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorSearchPredefinedAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorSearchHoursOfOperations(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -29736,7 +30551,7 @@ func awsRestjson1_deserializeOpErrorSearchPredefinedAttributes(response *smithyh
}
}
-func awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(v **SearchPredefinedAttributesOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentSearchHoursOfOperationsOutput(v **SearchHoursOfOperationsOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -29749,9 +30564,9 @@ func awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(v **Sear
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *SearchPredefinedAttributesOutput
+ var sv *SearchHoursOfOperationsOutput
if *v == nil {
- sv = &SearchPredefinedAttributesOutput{}
+ sv = &SearchHoursOfOperationsOutput{}
} else {
sv = *v
}
@@ -29771,6 +30586,11 @@ func awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(v **Sear
sv.ApproximateTotalCount = ptr.Int64(i64)
}
+ case "HoursOfOperations":
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationList(&sv.HoursOfOperations, value); err != nil {
+ return err
+ }
+
case "NextToken":
if value != nil {
jtv, ok := value.(string)
@@ -29780,11 +30600,6 @@ func awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(v **Sear
sv.NextToken = ptr.String(jtv)
}
- case "PredefinedAttributes":
- if err := awsRestjson1_deserializeDocumentPredefinedAttributeSearchSummaryList(&sv.PredefinedAttributes, value); err != nil {
- return err
- }
-
default:
_, _ = key, value
@@ -29794,14 +30609,14 @@ func awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(v **Sear
return nil
}
-type awsRestjson1_deserializeOpSearchPrompts struct {
+type awsRestjson1_deserializeOpSearchPredefinedAttributes struct {
}
-func (*awsRestjson1_deserializeOpSearchPrompts) ID() string {
+func (*awsRestjson1_deserializeOpSearchPredefinedAttributes) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpSearchPrompts) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpSearchPredefinedAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -29819,9 +30634,9 @@ func (m *awsRestjson1_deserializeOpSearchPrompts) HandleDeserialize(ctx context.
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorSearchPrompts(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorSearchPredefinedAttributes(response, &metadata)
}
- output := &SearchPromptsOutput{}
+ output := &SearchPredefinedAttributesOutput{}
out.Result = output
var buff [1024]byte
@@ -29842,7 +30657,7 @@ func (m *awsRestjson1_deserializeOpSearchPrompts) HandleDeserialize(ctx context.
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentSearchPromptsOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -29856,7 +30671,7 @@ func (m *awsRestjson1_deserializeOpSearchPrompts) HandleDeserialize(ctx context.
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorSearchPrompts(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorSearchPredefinedAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -29922,7 +30737,7 @@ func awsRestjson1_deserializeOpErrorSearchPrompts(response *smithyhttp.Response,
}
}
-func awsRestjson1_deserializeOpDocumentSearchPromptsOutput(v **SearchPromptsOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentSearchPredefinedAttributesOutput(v **SearchPredefinedAttributesOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -29935,9 +30750,9 @@ func awsRestjson1_deserializeOpDocumentSearchPromptsOutput(v **SearchPromptsOutp
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *SearchPromptsOutput
+ var sv *SearchPredefinedAttributesOutput
if *v == nil {
- sv = &SearchPromptsOutput{}
+ sv = &SearchPredefinedAttributesOutput{}
} else {
sv = *v
}
@@ -29966,8 +30781,8 @@ func awsRestjson1_deserializeOpDocumentSearchPromptsOutput(v **SearchPromptsOutp
sv.NextToken = ptr.String(jtv)
}
- case "Prompts":
- if err := awsRestjson1_deserializeDocumentPromptList(&sv.Prompts, value); err != nil {
+ case "PredefinedAttributes":
+ if err := awsRestjson1_deserializeDocumentPredefinedAttributeSearchSummaryList(&sv.PredefinedAttributes, value); err != nil {
return err
}
@@ -29980,14 +30795,14 @@ func awsRestjson1_deserializeOpDocumentSearchPromptsOutput(v **SearchPromptsOutp
return nil
}
-type awsRestjson1_deserializeOpSearchQueues struct {
+type awsRestjson1_deserializeOpSearchPrompts struct {
}
-func (*awsRestjson1_deserializeOpSearchQueues) ID() string {
+func (*awsRestjson1_deserializeOpSearchPrompts) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpSearchQueues) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpSearchPrompts) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -30005,9 +30820,9 @@ func (m *awsRestjson1_deserializeOpSearchQueues) HandleDeserialize(ctx context.C
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorSearchQueues(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorSearchPrompts(response, &metadata)
}
- output := &SearchQueuesOutput{}
+ output := &SearchPromptsOutput{}
out.Result = output
var buff [1024]byte
@@ -30028,7 +30843,7 @@ func (m *awsRestjson1_deserializeOpSearchQueues) HandleDeserialize(ctx context.C
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentSearchQueuesOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentSearchPromptsOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -30042,7 +30857,7 @@ func (m *awsRestjson1_deserializeOpSearchQueues) HandleDeserialize(ctx context.C
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorSearchQueues(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorSearchPrompts(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -30108,7 +30923,7 @@ func awsRestjson1_deserializeOpErrorSearchQueues(response *smithyhttp.Response,
}
}
-func awsRestjson1_deserializeOpDocumentSearchQueuesOutput(v **SearchQueuesOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentSearchPromptsOutput(v **SearchPromptsOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -30121,9 +30936,195 @@ func awsRestjson1_deserializeOpDocumentSearchQueuesOutput(v **SearchQueuesOutput
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *SearchQueuesOutput
+ var sv *SearchPromptsOutput
if *v == nil {
- sv = &SearchQueuesOutput{}
+ sv = &SearchPromptsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ApproximateTotalCount":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected ApproximateTotalCount to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.ApproximateTotalCount = ptr.Int64(i64)
+ }
+
+ case "NextToken":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected NextToken2500 to be of type string, got %T instead", value)
+ }
+ sv.NextToken = ptr.String(jtv)
+ }
+
+ case "Prompts":
+ if err := awsRestjson1_deserializeDocumentPromptList(&sv.Prompts, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+type awsRestjson1_deserializeOpSearchQueues struct {
+}
+
+func (*awsRestjson1_deserializeOpSearchQueues) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpSearchQueues) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorSearchQueues(response, &metadata)
+ }
+ output := &SearchQueuesOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentSearchQueuesOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorSearchQueues(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentSearchQueuesOutput(v **SearchQueuesOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *SearchQueuesOutput
+ if *v == nil {
+ sv = &SearchQueuesOutput{}
} else {
sv = *v
}
@@ -36698,14 +37699,14 @@ func awsRestjson1_deserializeOpErrorUpdateHoursOfOperation(response *smithyhttp.
}
}
-type awsRestjson1_deserializeOpUpdateInstanceAttribute struct {
+type awsRestjson1_deserializeOpUpdateHoursOfOperationOverride struct {
}
-func (*awsRestjson1_deserializeOpUpdateInstanceAttribute) ID() string {
+func (*awsRestjson1_deserializeOpUpdateHoursOfOperationOverride) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateInstanceAttribute) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateHoursOfOperationOverride) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -36723,9 +37724,9 @@ func (m *awsRestjson1_deserializeOpUpdateInstanceAttribute) HandleDeserialize(ct
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateInstanceAttribute(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateHoursOfOperationOverride(response, &metadata)
}
- output := &UpdateInstanceAttributeOutput{}
+ output := &UpdateHoursOfOperationOverrideOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -36738,7 +37739,7 @@ func (m *awsRestjson1_deserializeOpUpdateInstanceAttribute) HandleDeserialize(ct
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateInstanceAttribute(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateHoursOfOperationOverride(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -36779,6 +37780,12 @@ func awsRestjson1_deserializeOpErrorUpdateInstanceAttribute(response *smithyhttp
}
switch {
+ case strings.EqualFold("ConditionalOperationFailedException", errorCode):
+ return awsRestjson1_deserializeErrorConditionalOperationFailedException(response, errorBody)
+
+ case strings.EqualFold("DuplicateResourceException", errorCode):
+ return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
+
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -36804,14 +37811,14 @@ func awsRestjson1_deserializeOpErrorUpdateInstanceAttribute(response *smithyhttp
}
}
-type awsRestjson1_deserializeOpUpdateInstanceStorageConfig struct {
+type awsRestjson1_deserializeOpUpdateInstanceAttribute struct {
}
-func (*awsRestjson1_deserializeOpUpdateInstanceStorageConfig) ID() string {
+func (*awsRestjson1_deserializeOpUpdateInstanceAttribute) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateInstanceStorageConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateInstanceAttribute) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -36829,9 +37836,9 @@ func (m *awsRestjson1_deserializeOpUpdateInstanceStorageConfig) HandleDeserializ
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateInstanceStorageConfig(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateInstanceAttribute(response, &metadata)
}
- output := &UpdateInstanceStorageConfigOutput{}
+ output := &UpdateInstanceAttributeOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -36844,7 +37851,7 @@ func (m *awsRestjson1_deserializeOpUpdateInstanceStorageConfig) HandleDeserializ
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateInstanceStorageConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateInstanceAttribute(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -36910,14 +37917,14 @@ func awsRestjson1_deserializeOpErrorUpdateInstanceStorageConfig(response *smithy
}
}
-type awsRestjson1_deserializeOpUpdateParticipantRoleConfig struct {
+type awsRestjson1_deserializeOpUpdateInstanceStorageConfig struct {
}
-func (*awsRestjson1_deserializeOpUpdateParticipantRoleConfig) ID() string {
+func (*awsRestjson1_deserializeOpUpdateInstanceStorageConfig) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateParticipantRoleConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateInstanceStorageConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -36935,16 +37942,22 @@ func (m *awsRestjson1_deserializeOpUpdateParticipantRoleConfig) HandleDeserializ
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateParticipantRoleConfig(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateInstanceStorageConfig(response, &metadata)
}
- output := &UpdateParticipantRoleConfigOutput{}
+ output := &UpdateInstanceStorageConfigOutput{}
out.Result = output
+ if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to discard response body, %w", err),
+ }
+ }
+
span.End()
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateParticipantRoleConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateInstanceStorageConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -36985,9 +37998,6 @@ func awsRestjson1_deserializeOpErrorUpdateParticipantRoleConfig(response *smithy
}
switch {
- case strings.EqualFold("AccessDeniedException", errorCode):
- return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -37013,14 +38023,14 @@ func awsRestjson1_deserializeOpErrorUpdateParticipantRoleConfig(response *smithy
}
}
-type awsRestjson1_deserializeOpUpdatePhoneNumber struct {
+type awsRestjson1_deserializeOpUpdateParticipantAuthentication struct {
}
-func (*awsRestjson1_deserializeOpUpdatePhoneNumber) ID() string {
+func (*awsRestjson1_deserializeOpUpdateParticipantAuthentication) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdatePhoneNumber) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateParticipantAuthentication) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37038,44 +38048,16 @@ func (m *awsRestjson1_deserializeOpUpdatePhoneNumber) HandleDeserialize(ctx cont
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateParticipantAuthentication(response, &metadata)
}
- output := &UpdatePhoneNumberOutput{}
+ output := &UpdateParticipantAuthenticationOutput{}
out.Result = output
- var buff [1024]byte
- ringBuffer := smithyio.NewRingBuffer(buff[:])
-
- body := io.TeeReader(response.Body, ringBuffer)
-
- decoder := json.NewDecoder(body)
- decoder.UseNumber()
- var shape interface{}
- if err := decoder.Decode(&shape); err != nil && err != io.EOF {
- var snapshot bytes.Buffer
- io.Copy(&snapshot, ringBuffer)
- err = &smithy.DeserializationError{
- Err: fmt.Errorf("failed to decode response body, %w", err),
- Snapshot: snapshot.Bytes(),
- }
- return out, metadata, err
- }
-
- err = awsRestjson1_deserializeOpDocumentUpdatePhoneNumberOutput(&output, shape)
- if err != nil {
- var snapshot bytes.Buffer
- io.Copy(&snapshot, ringBuffer)
- return out, metadata, &smithy.DeserializationError{
- Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
- Snapshot: snapshot.Bytes(),
- }
- }
-
span.End()
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateParticipantAuthentication(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -37119,8 +38101,8 @@ func awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response *smithyhttp.Respo
case strings.EqualFold("AccessDeniedException", errorCode):
return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
- case strings.EqualFold("IdempotencyException", errorCode):
- return awsRestjson1_deserializeErrorIdempotencyException(response, errorBody)
+ case strings.EqualFold("ConflictException", errorCode):
+ return awsRestjson1_deserializeErrorConflictException(response, errorBody)
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -37128,11 +38110,8 @@ func awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response *smithyhttp.Respo
case strings.EqualFold("InvalidParameterException", errorCode):
return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
- case strings.EqualFold("ResourceInUseException", errorCode):
- return awsRestjson1_deserializeErrorResourceInUseException(response, errorBody)
-
- case strings.EqualFold("ResourceNotFoundException", errorCode):
- return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
case strings.EqualFold("ThrottlingException", errorCode):
return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
@@ -37147,63 +38126,14 @@ func awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response *smithyhttp.Respo
}
}
-func awsRestjson1_deserializeOpDocumentUpdatePhoneNumberOutput(v **UpdatePhoneNumberOutput, value interface{}) error {
- if v == nil {
- return fmt.Errorf("unexpected nil of type %T", v)
- }
- if value == nil {
- return nil
- }
-
- shape, ok := value.(map[string]interface{})
- if !ok {
- return fmt.Errorf("unexpected JSON type %v", value)
- }
-
- var sv *UpdatePhoneNumberOutput
- if *v == nil {
- sv = &UpdatePhoneNumberOutput{}
- } else {
- sv = *v
- }
-
- for key, value := range shape {
- switch key {
- case "PhoneNumberArn":
- if value != nil {
- jtv, ok := value.(string)
- if !ok {
- return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
- }
- sv.PhoneNumberArn = ptr.String(jtv)
- }
-
- case "PhoneNumberId":
- if value != nil {
- jtv, ok := value.(string)
- if !ok {
- return fmt.Errorf("expected PhoneNumberId to be of type string, got %T instead", value)
- }
- sv.PhoneNumberId = ptr.String(jtv)
- }
-
- default:
- _, _ = key, value
-
- }
- }
- *v = sv
- return nil
-}
-
-type awsRestjson1_deserializeOpUpdatePhoneNumberMetadata struct {
+type awsRestjson1_deserializeOpUpdateParticipantRoleConfig struct {
}
-func (*awsRestjson1_deserializeOpUpdatePhoneNumberMetadata) ID() string {
+func (*awsRestjson1_deserializeOpUpdateParticipantRoleConfig) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdatePhoneNumberMetadata) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateParticipantRoleConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37221,22 +38151,16 @@ func (m *awsRestjson1_deserializeOpUpdatePhoneNumberMetadata) HandleDeserialize(
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateParticipantRoleConfig(response, &metadata)
}
- output := &UpdatePhoneNumberMetadataOutput{}
+ output := &UpdateParticipantRoleConfigOutput{}
out.Result = output
- if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
- return out, metadata, &smithy.DeserializationError{
- Err: fmt.Errorf("failed to discard response body, %w", err),
- }
- }
-
span.End()
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateParticipantRoleConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -37280,9 +38204,6 @@ func awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response *smithyht
case strings.EqualFold("AccessDeniedException", errorCode):
return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
- case strings.EqualFold("IdempotencyException", errorCode):
- return awsRestjson1_deserializeErrorIdempotencyException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -37292,9 +38213,6 @@ func awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response *smithyht
case strings.EqualFold("InvalidRequestException", errorCode):
return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
- case strings.EqualFold("ResourceInUseException", errorCode):
- return awsRestjson1_deserializeErrorResourceInUseException(response, errorBody)
-
case strings.EqualFold("ResourceNotFoundException", errorCode):
return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
@@ -37311,120 +38229,14 @@ func awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response *smithyht
}
}
-type awsRestjson1_deserializeOpUpdatePredefinedAttribute struct {
-}
-
-func (*awsRestjson1_deserializeOpUpdatePredefinedAttribute) ID() string {
- return "OperationDeserializer"
-}
-
-func (m *awsRestjson1_deserializeOpUpdatePredefinedAttribute) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
- out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
-) {
- out, metadata, err = next.HandleDeserialize(ctx, in)
- if err != nil {
- return out, metadata, err
- }
-
- _, span := tracing.StartSpan(ctx, "OperationDeserializer")
- endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
- defer endTimer()
- defer span.End()
- response, ok := out.RawResponse.(*smithyhttp.Response)
- if !ok {
- return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
- }
-
- if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdatePredefinedAttribute(response, &metadata)
- }
- output := &UpdatePredefinedAttributeOutput{}
- out.Result = output
-
- if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
- return out, metadata, &smithy.DeserializationError{
- Err: fmt.Errorf("failed to discard response body, %w", err),
- }
- }
-
- span.End()
- return out, metadata, err
-}
-
-func awsRestjson1_deserializeOpErrorUpdatePredefinedAttribute(response *smithyhttp.Response, metadata *middleware.Metadata) error {
- var errorBuffer bytes.Buffer
- if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
- return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
- }
- errorBody := bytes.NewReader(errorBuffer.Bytes())
-
- errorCode := "UnknownError"
- errorMessage := errorCode
-
- headerCode := response.Header.Get("X-Amzn-ErrorType")
- if len(headerCode) != 0 {
- errorCode = restjson.SanitizeErrorCode(headerCode)
- }
-
- var buff [1024]byte
- ringBuffer := smithyio.NewRingBuffer(buff[:])
-
- body := io.TeeReader(errorBody, ringBuffer)
- decoder := json.NewDecoder(body)
- decoder.UseNumber()
- jsonCode, message, err := restjson.GetErrorInfo(decoder)
- if err != nil {
- var snapshot bytes.Buffer
- io.Copy(&snapshot, ringBuffer)
- err = &smithy.DeserializationError{
- Err: fmt.Errorf("failed to decode response body, %w", err),
- Snapshot: snapshot.Bytes(),
- }
- return err
- }
-
- errorBody.Seek(0, io.SeekStart)
- if len(headerCode) == 0 && len(jsonCode) != 0 {
- errorCode = restjson.SanitizeErrorCode(jsonCode)
- }
- if len(message) != 0 {
- errorMessage = message
- }
-
- switch {
- case strings.EqualFold("InternalServiceException", errorCode):
- return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
-
- case strings.EqualFold("InvalidParameterException", errorCode):
- return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
-
- case strings.EqualFold("InvalidRequestException", errorCode):
- return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
-
- case strings.EqualFold("ResourceNotFoundException", errorCode):
- return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
-
- case strings.EqualFold("ThrottlingException", errorCode):
- return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
-
- default:
- genericError := &smithy.GenericAPIError{
- Code: errorCode,
- Message: errorMessage,
- }
- return genericError
-
- }
-}
-
-type awsRestjson1_deserializeOpUpdatePrompt struct {
+type awsRestjson1_deserializeOpUpdatePhoneNumber struct {
}
-func (*awsRestjson1_deserializeOpUpdatePrompt) ID() string {
+func (*awsRestjson1_deserializeOpUpdatePhoneNumber) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdatePrompt) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdatePhoneNumber) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37442,9 +38254,9 @@ func (m *awsRestjson1_deserializeOpUpdatePrompt) HandleDeserialize(ctx context.C
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdatePrompt(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response, &metadata)
}
- output := &UpdatePromptOutput{}
+ output := &UpdatePhoneNumberOutput{}
out.Result = output
var buff [1024]byte
@@ -37465,7 +38277,7 @@ func (m *awsRestjson1_deserializeOpUpdatePrompt) HandleDeserialize(ctx context.C
return out, metadata, err
}
- err = awsRestjson1_deserializeOpDocumentUpdatePromptOutput(&output, shape)
+ err = awsRestjson1_deserializeOpDocumentUpdatePhoneNumberOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -37479,7 +38291,7 @@ func (m *awsRestjson1_deserializeOpUpdatePrompt) HandleDeserialize(ctx context.C
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdatePrompt(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdatePhoneNumber(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -37520,14 +38332,20 @@ func awsRestjson1_deserializeOpErrorUpdatePrompt(response *smithyhttp.Response,
}
switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("IdempotencyException", errorCode):
+ return awsRestjson1_deserializeErrorIdempotencyException(response, errorBody)
+
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
case strings.EqualFold("InvalidParameterException", errorCode):
return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
- case strings.EqualFold("InvalidRequestException", errorCode):
- return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+ case strings.EqualFold("ResourceInUseException", errorCode):
+ return awsRestjson1_deserializeErrorResourceInUseException(response, errorBody)
case strings.EqualFold("ResourceNotFoundException", errorCode):
return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
@@ -37545,7 +38363,7 @@ func awsRestjson1_deserializeOpErrorUpdatePrompt(response *smithyhttp.Response,
}
}
-func awsRestjson1_deserializeOpDocumentUpdatePromptOutput(v **UpdatePromptOutput, value interface{}) error {
+func awsRestjson1_deserializeOpDocumentUpdatePhoneNumberOutput(v **UpdatePhoneNumberOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
}
@@ -37558,31 +38376,31 @@ func awsRestjson1_deserializeOpDocumentUpdatePromptOutput(v **UpdatePromptOutput
return fmt.Errorf("unexpected JSON type %v", value)
}
- var sv *UpdatePromptOutput
+ var sv *UpdatePhoneNumberOutput
if *v == nil {
- sv = &UpdatePromptOutput{}
+ sv = &UpdatePhoneNumberOutput{}
} else {
sv = *v
}
for key, value := range shape {
switch key {
- case "PromptARN":
+ case "PhoneNumberArn":
if value != nil {
jtv, ok := value.(string)
if !ok {
return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
}
- sv.PromptARN = ptr.String(jtv)
+ sv.PhoneNumberArn = ptr.String(jtv)
}
- case "PromptId":
+ case "PhoneNumberId":
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected PromptId to be of type string, got %T instead", value)
+ return fmt.Errorf("expected PhoneNumberId to be of type string, got %T instead", value)
}
- sv.PromptId = ptr.String(jtv)
+ sv.PhoneNumberId = ptr.String(jtv)
}
default:
@@ -37594,14 +38412,14 @@ func awsRestjson1_deserializeOpDocumentUpdatePromptOutput(v **UpdatePromptOutput
return nil
}
-type awsRestjson1_deserializeOpUpdateQueueHoursOfOperation struct {
+type awsRestjson1_deserializeOpUpdatePhoneNumberMetadata struct {
}
-func (*awsRestjson1_deserializeOpUpdateQueueHoursOfOperation) ID() string {
+func (*awsRestjson1_deserializeOpUpdatePhoneNumberMetadata) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQueueHoursOfOperation) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdatePhoneNumberMetadata) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37619,9 +38437,9 @@ func (m *awsRestjson1_deserializeOpUpdateQueueHoursOfOperation) HandleDeserializ
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response, &metadata)
}
- output := &UpdateQueueHoursOfOperationOutput{}
+ output := &UpdatePhoneNumberMetadataOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -37634,7 +38452,7 @@ func (m *awsRestjson1_deserializeOpUpdateQueueHoursOfOperation) HandleDeserializ
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdatePhoneNumberMetadata(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -37675,6 +38493,12 @@ func awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response *smithy
}
switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("IdempotencyException", errorCode):
+ return awsRestjson1_deserializeErrorIdempotencyException(response, errorBody)
+
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -37684,6 +38508,9 @@ func awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response *smithy
case strings.EqualFold("InvalidRequestException", errorCode):
return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+ case strings.EqualFold("ResourceInUseException", errorCode):
+ return awsRestjson1_deserializeErrorResourceInUseException(response, errorBody)
+
case strings.EqualFold("ResourceNotFoundException", errorCode):
return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
@@ -37700,14 +38527,14 @@ func awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response *smithy
}
}
-type awsRestjson1_deserializeOpUpdateQueueMaxContacts struct {
+type awsRestjson1_deserializeOpUpdatePredefinedAttribute struct {
}
-func (*awsRestjson1_deserializeOpUpdateQueueMaxContacts) ID() string {
+func (*awsRestjson1_deserializeOpUpdatePredefinedAttribute) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQueueMaxContacts) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdatePredefinedAttribute) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37725,9 +38552,9 @@ func (m *awsRestjson1_deserializeOpUpdateQueueMaxContacts) HandleDeserialize(ctx
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueMaxContacts(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdatePredefinedAttribute(response, &metadata)
}
- output := &UpdateQueueMaxContactsOutput{}
+ output := &UpdatePredefinedAttributeOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -37740,7 +38567,7 @@ func (m *awsRestjson1_deserializeOpUpdateQueueMaxContacts) HandleDeserialize(ctx
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQueueMaxContacts(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdatePredefinedAttribute(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -37806,14 +38633,14 @@ func awsRestjson1_deserializeOpErrorUpdateQueueMaxContacts(response *smithyhttp.
}
}
-type awsRestjson1_deserializeOpUpdateQueueName struct {
+type awsRestjson1_deserializeOpUpdatePrompt struct {
}
-func (*awsRestjson1_deserializeOpUpdateQueueName) ID() string {
+func (*awsRestjson1_deserializeOpUpdatePrompt) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQueueName) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdatePrompt) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37831,14 +38658,36 @@ func (m *awsRestjson1_deserializeOpUpdateQueueName) HandleDeserialize(ctx contex
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueName(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdatePrompt(response, &metadata)
}
- output := &UpdateQueueNameOutput{}
+ output := &UpdatePromptOutput{}
out.Result = output
- if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentUpdatePromptOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
return out, metadata, &smithy.DeserializationError{
- Err: fmt.Errorf("failed to discard response body, %w", err),
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
}
}
@@ -37846,7 +38695,7 @@ func (m *awsRestjson1_deserializeOpUpdateQueueName) HandleDeserialize(ctx contex
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQueueName(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdatePrompt(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -37887,9 +38736,6 @@ func awsRestjson1_deserializeOpErrorUpdateQueueName(response *smithyhttp.Respons
}
switch {
- case strings.EqualFold("DuplicateResourceException", errorCode):
- return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -37915,14 +38761,63 @@ func awsRestjson1_deserializeOpErrorUpdateQueueName(response *smithyhttp.Respons
}
}
-type awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig struct {
+func awsRestjson1_deserializeOpDocumentUpdatePromptOutput(v **UpdatePromptOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdatePromptOutput
+ if *v == nil {
+ sv = &UpdatePromptOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "PromptARN":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.PromptARN = ptr.String(jtv)
+ }
+
+ case "PromptId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected PromptId to be of type string, got %T instead", value)
+ }
+ sv.PromptId = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
}
-func (*awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig) ID() string {
+type awsRestjson1_deserializeOpUpdateQueueHoursOfOperation struct {
+}
+
+func (*awsRestjson1_deserializeOpUpdateQueueHoursOfOperation) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQueueHoursOfOperation) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -37940,9 +38835,9 @@ func (m *awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig) HandleDeseri
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueOutboundCallerConfig(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response, &metadata)
}
- output := &UpdateQueueOutboundCallerConfigOutput{}
+ output := &UpdateQueueHoursOfOperationOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -37955,7 +38850,7 @@ func (m *awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig) HandleDeseri
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQueueOutboundCallerConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQueueHoursOfOperation(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38021,14 +38916,14 @@ func awsRestjson1_deserializeOpErrorUpdateQueueOutboundCallerConfig(response *sm
}
}
-type awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig struct {
+type awsRestjson1_deserializeOpUpdateQueueMaxContacts struct {
}
-func (*awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQueueMaxContacts) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQueueMaxContacts) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38046,9 +38941,9 @@ func (m *awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig) HandleDeseria
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueOutboundEmailConfig(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueMaxContacts(response, &metadata)
}
- output := &UpdateQueueOutboundEmailConfigOutput{}
+ output := &UpdateQueueMaxContactsOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38061,7 +38956,7 @@ func (m *awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig) HandleDeseria
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQueueOutboundEmailConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQueueMaxContacts(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38102,12 +38997,6 @@ func awsRestjson1_deserializeOpErrorUpdateQueueOutboundEmailConfig(response *smi
}
switch {
- case strings.EqualFold("AccessDeniedException", errorCode):
- return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
-
- case strings.EqualFold("ConditionalOperationFailedException", errorCode):
- return awsRestjson1_deserializeErrorConditionalOperationFailedException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -38133,14 +39022,14 @@ func awsRestjson1_deserializeOpErrorUpdateQueueOutboundEmailConfig(response *smi
}
}
-type awsRestjson1_deserializeOpUpdateQueueStatus struct {
+type awsRestjson1_deserializeOpUpdateQueueName struct {
}
-func (*awsRestjson1_deserializeOpUpdateQueueStatus) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQueueName) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQueueStatus) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQueueName) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38158,9 +39047,9 @@ func (m *awsRestjson1_deserializeOpUpdateQueueStatus) HandleDeserialize(ctx cont
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueStatus(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueName(response, &metadata)
}
- output := &UpdateQueueStatusOutput{}
+ output := &UpdateQueueNameOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38173,7 +39062,7 @@ func (m *awsRestjson1_deserializeOpUpdateQueueStatus) HandleDeserialize(ctx cont
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQueueStatus(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQueueName(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38214,6 +39103,9 @@ func awsRestjson1_deserializeOpErrorUpdateQueueStatus(response *smithyhttp.Respo
}
switch {
+ case strings.EqualFold("DuplicateResourceException", errorCode):
+ return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
+
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -38239,14 +39131,14 @@ func awsRestjson1_deserializeOpErrorUpdateQueueStatus(response *smithyhttp.Respo
}
}
-type awsRestjson1_deserializeOpUpdateQuickConnectConfig struct {
+type awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig struct {
}
-func (*awsRestjson1_deserializeOpUpdateQuickConnectConfig) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQuickConnectConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQueueOutboundCallerConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38264,9 +39156,9 @@ func (m *awsRestjson1_deserializeOpUpdateQuickConnectConfig) HandleDeserialize(c
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQuickConnectConfig(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueOutboundCallerConfig(response, &metadata)
}
- output := &UpdateQuickConnectConfigOutput{}
+ output := &UpdateQueueOutboundCallerConfigOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38279,7 +39171,7 @@ func (m *awsRestjson1_deserializeOpUpdateQuickConnectConfig) HandleDeserialize(c
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQuickConnectConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQueueOutboundCallerConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38345,14 +39237,14 @@ func awsRestjson1_deserializeOpErrorUpdateQuickConnectConfig(response *smithyhtt
}
}
-type awsRestjson1_deserializeOpUpdateQuickConnectName struct {
+type awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig struct {
}
-func (*awsRestjson1_deserializeOpUpdateQuickConnectName) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateQuickConnectName) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQueueOutboundEmailConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38370,9 +39262,9 @@ func (m *awsRestjson1_deserializeOpUpdateQuickConnectName) HandleDeserialize(ctx
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateQuickConnectName(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueOutboundEmailConfig(response, &metadata)
}
- output := &UpdateQuickConnectNameOutput{}
+ output := &UpdateQueueOutboundEmailConfigOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38385,7 +39277,7 @@ func (m *awsRestjson1_deserializeOpUpdateQuickConnectName) HandleDeserialize(ctx
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateQuickConnectName(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQueueOutboundEmailConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38426,6 +39318,12 @@ func awsRestjson1_deserializeOpErrorUpdateQuickConnectName(response *smithyhttp.
}
switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("ConditionalOperationFailedException", errorCode):
+ return awsRestjson1_deserializeErrorConditionalOperationFailedException(response, errorBody)
+
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -38451,14 +39349,14 @@ func awsRestjson1_deserializeOpErrorUpdateQuickConnectName(response *smithyhttp.
}
}
-type awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer struct {
+type awsRestjson1_deserializeOpUpdateQueueStatus struct {
}
-func (*awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQueueStatus) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQueueStatus) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38476,9 +39374,9 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer) H
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileAgentAvailabilityTimer(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQueueStatus(response, &metadata)
}
- output := &UpdateRoutingProfileAgentAvailabilityTimerOutput{}
+ output := &UpdateQueueStatusOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38491,7 +39389,7 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer) H
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateRoutingProfileAgentAvailabilityTimer(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQueueStatus(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38557,14 +39455,14 @@ func awsRestjson1_deserializeOpErrorUpdateRoutingProfileAgentAvailabilityTimer(r
}
}
-type awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency struct {
+type awsRestjson1_deserializeOpUpdateQuickConnectConfig struct {
}
-func (*awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQuickConnectConfig) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQuickConnectConfig) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38582,9 +39480,9 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency) HandleDeseri
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileConcurrency(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQuickConnectConfig(response, &metadata)
}
- output := &UpdateRoutingProfileConcurrencyOutput{}
+ output := &UpdateQuickConnectConfigOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38597,7 +39495,7 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency) HandleDeseri
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateRoutingProfileConcurrency(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQuickConnectConfig(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38663,14 +39561,14 @@ func awsRestjson1_deserializeOpErrorUpdateRoutingProfileConcurrency(response *sm
}
}
-type awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue struct {
+type awsRestjson1_deserializeOpUpdateQuickConnectName struct {
}
-func (*awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue) ID() string {
+func (*awsRestjson1_deserializeOpUpdateQuickConnectName) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateQuickConnectName) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38688,9 +39586,9 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue) Han
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileDefaultOutboundQueue(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateQuickConnectName(response, &metadata)
}
- output := &UpdateRoutingProfileDefaultOutboundQueueOutput{}
+ output := &UpdateQuickConnectNameOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38703,7 +39601,7 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue) Han
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateRoutingProfileDefaultOutboundQueue(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateQuickConnectName(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38769,14 +39667,14 @@ func awsRestjson1_deserializeOpErrorUpdateRoutingProfileDefaultOutboundQueue(res
}
}
-type awsRestjson1_deserializeOpUpdateRoutingProfileName struct {
+type awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer struct {
}
-func (*awsRestjson1_deserializeOpUpdateRoutingProfileName) ID() string {
+func (*awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateRoutingProfileName) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateRoutingProfileAgentAvailabilityTimer) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38794,9 +39692,9 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileName) HandleDeserialize(c
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileName(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileAgentAvailabilityTimer(response, &metadata)
}
- output := &UpdateRoutingProfileNameOutput{}
+ output := &UpdateRoutingProfileAgentAvailabilityTimerOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38809,7 +39707,7 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileName) HandleDeserialize(c
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateRoutingProfileName(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateRoutingProfileAgentAvailabilityTimer(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38850,9 +39748,6 @@ func awsRestjson1_deserializeOpErrorUpdateRoutingProfileName(response *smithyhtt
}
switch {
- case strings.EqualFold("DuplicateResourceException", errorCode):
- return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
@@ -38878,14 +39773,14 @@ func awsRestjson1_deserializeOpErrorUpdateRoutingProfileName(response *smithyhtt
}
}
-type awsRestjson1_deserializeOpUpdateRoutingProfileQueues struct {
+type awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency struct {
}
-func (*awsRestjson1_deserializeOpUpdateRoutingProfileQueues) ID() string {
+func (*awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateRoutingProfileQueues) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateRoutingProfileConcurrency) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -38903,9 +39798,9 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileQueues) HandleDeserialize
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileQueues(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileConcurrency(response, &metadata)
}
- output := &UpdateRoutingProfileQueuesOutput{}
+ output := &UpdateRoutingProfileConcurrencyOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -38918,7 +39813,7 @@ func (m *awsRestjson1_deserializeOpUpdateRoutingProfileQueues) HandleDeserialize
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateRoutingProfileQueues(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateRoutingProfileConcurrency(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -38984,14 +39879,14 @@ func awsRestjson1_deserializeOpErrorUpdateRoutingProfileQueues(response *smithyh
}
}
-type awsRestjson1_deserializeOpUpdateRule struct {
+type awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue struct {
}
-func (*awsRestjson1_deserializeOpUpdateRule) ID() string {
+func (*awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateRule) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateRoutingProfileDefaultOutboundQueue) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -39009,9 +39904,9 @@ func (m *awsRestjson1_deserializeOpUpdateRule) HandleDeserialize(ctx context.Con
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateRule(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileDefaultOutboundQueue(response, &metadata)
}
- output := &UpdateRuleOutput{}
+ output := &UpdateRoutingProfileDefaultOutboundQueueOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -39024,7 +39919,7 @@ func (m *awsRestjson1_deserializeOpUpdateRule) HandleDeserialize(ctx context.Con
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateRule(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateRoutingProfileDefaultOutboundQueue(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -39065,18 +39960,15 @@ func awsRestjson1_deserializeOpErrorUpdateRule(response *smithyhttp.Response, me
}
switch {
- case strings.EqualFold("AccessDeniedException", errorCode):
- return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
-
case strings.EqualFold("InternalServiceException", errorCode):
return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
case strings.EqualFold("InvalidRequestException", errorCode):
return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
- case strings.EqualFold("ResourceConflictException", errorCode):
- return awsRestjson1_deserializeErrorResourceConflictException(response, errorBody)
-
case strings.EqualFold("ResourceNotFoundException", errorCode):
return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
@@ -39093,14 +39985,14 @@ func awsRestjson1_deserializeOpErrorUpdateRule(response *smithyhttp.Response, me
}
}
-type awsRestjson1_deserializeOpUpdateSecurityProfile struct {
+type awsRestjson1_deserializeOpUpdateRoutingProfileName struct {
}
-func (*awsRestjson1_deserializeOpUpdateSecurityProfile) ID() string {
+func (*awsRestjson1_deserializeOpUpdateRoutingProfileName) ID() string {
return "OperationDeserializer"
}
-func (m *awsRestjson1_deserializeOpUpdateSecurityProfile) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsRestjson1_deserializeOpUpdateRoutingProfileName) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -39118,9 +40010,9 @@ func (m *awsRestjson1_deserializeOpUpdateSecurityProfile) HandleDeserialize(ctx
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsRestjson1_deserializeOpErrorUpdateSecurityProfile(response, &metadata)
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileName(response, &metadata)
}
- output := &UpdateSecurityProfileOutput{}
+ output := &UpdateRoutingProfileNameOutput{}
out.Result = output
if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
@@ -39133,7 +40025,331 @@ func (m *awsRestjson1_deserializeOpUpdateSecurityProfile) HandleDeserialize(ctx
return out, metadata, err
}
-func awsRestjson1_deserializeOpErrorUpdateSecurityProfile(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsRestjson1_deserializeOpErrorUpdateRoutingProfileName(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("DuplicateResourceException", errorCode):
+ return awsRestjson1_deserializeErrorDuplicateResourceException(response, errorBody)
+
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsRestjson1_deserializeOpUpdateRoutingProfileQueues struct {
+}
+
+func (*awsRestjson1_deserializeOpUpdateRoutingProfileQueues) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpUpdateRoutingProfileQueues) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRoutingProfileQueues(response, &metadata)
+ }
+ output := &UpdateRoutingProfileQueuesOutput{}
+ out.Result = output
+
+ if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to discard response body, %w", err),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorUpdateRoutingProfileQueues(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidParameterException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidParameterException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsRestjson1_deserializeOpUpdateRule struct {
+}
+
+func (*awsRestjson1_deserializeOpUpdateRule) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpUpdateRule) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateRule(response, &metadata)
+ }
+ output := &UpdateRuleOutput{}
+ out.Result = output
+
+ if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to discard response body, %w", err),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorUpdateRule(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServiceException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServiceException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceConflictException", errorCode):
+ return awsRestjson1_deserializeErrorResourceConflictException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsRestjson1_deserializeOpUpdateSecurityProfile struct {
+}
+
+func (*awsRestjson1_deserializeOpUpdateSecurityProfile) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpUpdateSecurityProfile) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorUpdateSecurityProfile(response, &metadata)
+ }
+ output := &UpdateSecurityProfileOutput{}
+ out.Result = output
+
+ if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to discard response body, %w", err),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorUpdateSecurityProfile(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -44523,6 +45739,15 @@ func awsRestjson1_deserializeDocumentContact(v **types.Contact, value interface{
return err
}
+ case "CustomerId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CustomerId to be of type string, got %T instead", value)
+ }
+ sv.CustomerId = ptr.String(jtv)
+ }
+
case "CustomerVoiceActivity":
if err := awsRestjson1_deserializeDocumentCustomerVoiceActivity(&sv.CustomerVoiceActivity, value); err != nil {
return err
@@ -47138,6 +48363,85 @@ func awsRestjson1_deserializeDocumentDuplicateResourceException(v **types.Duplic
return nil
}
+func awsRestjson1_deserializeDocumentEffectiveHoursOfOperationList(v *[]types.EffectiveHoursOfOperations, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.EffectiveHoursOfOperations
+ if *v == nil {
+ cv = []types.EffectiveHoursOfOperations{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.EffectiveHoursOfOperations
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentEffectiveHoursOfOperations(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentEffectiveHoursOfOperations(v **types.EffectiveHoursOfOperations, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.EffectiveHoursOfOperations
+ if *v == nil {
+ sv = &types.EffectiveHoursOfOperations{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Date":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected HoursOfOperationOverrideYearMonthDayDateFormat to be of type string, got %T instead", value)
+ }
+ sv.Date = ptr.String(jtv)
+ }
+
+ case "OperationalHours":
+ if err := awsRestjson1_deserializeDocumentOperationalHours(&sv.OperationalHours, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentEmailAddressList(v *[]types.EmailAddressMetadata, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -51519,6 +52823,223 @@ func awsRestjson1_deserializeDocumentHoursOfOperationList(v *[]types.HoursOfOper
return nil
}
+func awsRestjson1_deserializeDocumentHoursOfOperationOverride(v **types.HoursOfOperationOverride, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.HoursOfOperationOverride
+ if *v == nil {
+ sv = &types.HoursOfOperationOverride{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Config":
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationOverrideConfigList(&sv.Config, value); err != nil {
+ return err
+ }
+
+ case "Description":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CommonHumanReadableDescription to be of type string, got %T instead", value)
+ }
+ sv.Description = ptr.String(jtv)
+ }
+
+ case "EffectiveFrom":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected HoursOfOperationOverrideYearMonthDayDateFormat to be of type string, got %T instead", value)
+ }
+ sv.EffectiveFrom = ptr.String(jtv)
+ }
+
+ case "EffectiveTill":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected HoursOfOperationOverrideYearMonthDayDateFormat to be of type string, got %T instead", value)
+ }
+ sv.EffectiveTill = ptr.String(jtv)
+ }
+
+ case "HoursOfOperationArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ARN to be of type string, got %T instead", value)
+ }
+ sv.HoursOfOperationArn = ptr.String(jtv)
+ }
+
+ case "HoursOfOperationId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected HoursOfOperationId to be of type string, got %T instead", value)
+ }
+ sv.HoursOfOperationId = ptr.String(jtv)
+ }
+
+ case "HoursOfOperationOverrideId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected HoursOfOperationOverrideId to be of type string, got %T instead", value)
+ }
+ sv.HoursOfOperationOverrideId = ptr.String(jtv)
+ }
+
+ case "Name":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CommonHumanReadableName to be of type string, got %T instead", value)
+ }
+ sv.Name = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentHoursOfOperationOverrideConfig(v **types.HoursOfOperationOverrideConfig, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.HoursOfOperationOverrideConfig
+ if *v == nil {
+ sv = &types.HoursOfOperationOverrideConfig{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Day":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected OverrideDays to be of type string, got %T instead", value)
+ }
+ sv.Day = types.OverrideDays(jtv)
+ }
+
+ case "EndTime":
+ if err := awsRestjson1_deserializeDocumentOverrideTimeSlice(&sv.EndTime, value); err != nil {
+ return err
+ }
+
+ case "StartTime":
+ if err := awsRestjson1_deserializeDocumentOverrideTimeSlice(&sv.StartTime, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentHoursOfOperationOverrideConfigList(v *[]types.HoursOfOperationOverrideConfig, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.HoursOfOperationOverrideConfig
+ if *v == nil {
+ cv = []types.HoursOfOperationOverrideConfig{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.HoursOfOperationOverrideConfig
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationOverrideConfig(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentHoursOfOperationOverrideList(v *[]types.HoursOfOperationOverride, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.HoursOfOperationOverride
+ if *v == nil {
+ cv = []types.HoursOfOperationOverride{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.HoursOfOperationOverride
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentHoursOfOperationOverride(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentHoursOfOperationSummary(v **types.HoursOfOperationSummary, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -54079,6 +55600,81 @@ func awsRestjson1_deserializeDocumentNumericQuestionPropertyValueAutomation(v **
return nil
}
+func awsRestjson1_deserializeDocumentOperationalHour(v **types.OperationalHour, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.OperationalHour
+ if *v == nil {
+ sv = &types.OperationalHour{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "End":
+ if err := awsRestjson1_deserializeDocumentOverrideTimeSlice(&sv.End, value); err != nil {
+ return err
+ }
+
+ case "Start":
+ if err := awsRestjson1_deserializeDocumentOverrideTimeSlice(&sv.Start, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentOperationalHours(v *[]types.OperationalHour, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.OperationalHour
+ if *v == nil {
+ cv = []types.OperationalHour{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.OperationalHour
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentOperationalHour(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentOriginsList(v *[]string, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -54293,6 +55889,63 @@ func awsRestjson1_deserializeDocumentOutputTypeNotFoundException(v **types.Outpu
return nil
}
+func awsRestjson1_deserializeDocumentOverrideTimeSlice(v **types.OverrideTimeSlice, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.OverrideTimeSlice
+ if *v == nil {
+ sv = &types.OverrideTimeSlice{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Hours":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected Hours24Format to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.Hours = ptr.Int32(int32(i64))
+ }
+
+ case "Minutes":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected MinutesLimit60 to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.Minutes = ptr.Int32(int32(i64))
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentParticipantCapabilities(v **types.ParticipantCapabilities, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
diff --git a/service/connect/generated.json b/service/connect/generated.json
index f8fd5c4ffde..37e0e63b369 100644
--- a/service/connect/generated.json
+++ b/service/connect/generated.json
@@ -38,6 +38,7 @@
"api_op_CreateEmailAddress.go",
"api_op_CreateEvaluationForm.go",
"api_op_CreateHoursOfOperation.go",
+ "api_op_CreateHoursOfOperationOverride.go",
"api_op_CreateInstance.go",
"api_op_CreateIntegrationAssociation.go",
"api_op_CreateParticipant.go",
@@ -66,6 +67,7 @@
"api_op_DeleteEmailAddress.go",
"api_op_DeleteEvaluationForm.go",
"api_op_DeleteHoursOfOperation.go",
+ "api_op_DeleteHoursOfOperationOverride.go",
"api_op_DeleteInstance.go",
"api_op_DeleteIntegrationAssociation.go",
"api_op_DeletePredefinedAttribute.go",
@@ -93,6 +95,7 @@
"api_op_DescribeEmailAddress.go",
"api_op_DescribeEvaluationForm.go",
"api_op_DescribeHoursOfOperation.go",
+ "api_op_DescribeHoursOfOperationOverride.go",
"api_op_DescribeInstance.go",
"api_op_DescribeInstanceAttribute.go",
"api_op_DescribeInstanceStorageConfig.go",
@@ -128,6 +131,7 @@
"api_op_GetContactAttributes.go",
"api_op_GetCurrentMetricData.go",
"api_op_GetCurrentUserData.go",
+ "api_op_GetEffectiveHoursOfOperations.go",
"api_op_GetFederationToken.go",
"api_op_GetFlowAssociation.go",
"api_op_GetMetricData.go",
@@ -151,6 +155,7 @@
"api_op_ListEvaluationFormVersions.go",
"api_op_ListEvaluationForms.go",
"api_op_ListFlowAssociations.go",
+ "api_op_ListHoursOfOperationOverrides.go",
"api_op_ListHoursOfOperations.go",
"api_op_ListInstanceAttributes.go",
"api_op_ListInstanceStorageConfigs.go",
@@ -196,6 +201,7 @@
"api_op_SearchContactFlows.go",
"api_op_SearchContacts.go",
"api_op_SearchEmailAddresses.go",
+ "api_op_SearchHoursOfOperationOverrides.go",
"api_op_SearchHoursOfOperations.go",
"api_op_SearchPredefinedAttributes.go",
"api_op_SearchPrompts.go",
@@ -246,8 +252,10 @@
"api_op_UpdateEmailAddressMetadata.go",
"api_op_UpdateEvaluationForm.go",
"api_op_UpdateHoursOfOperation.go",
+ "api_op_UpdateHoursOfOperationOverride.go",
"api_op_UpdateInstanceAttribute.go",
"api_op_UpdateInstanceStorageConfig.go",
+ "api_op_UpdateParticipantAuthentication.go",
"api_op_UpdateParticipantRoleConfig.go",
"api_op_UpdatePhoneNumber.go",
"api_op_UpdatePhoneNumberMetadata.go",
diff --git a/service/connect/go_module_metadata.go b/service/connect/go_module_metadata.go
index 3cfb81a4ce6..0901824fd6a 100644
--- a/service/connect/go_module_metadata.go
+++ b/service/connect/go_module_metadata.go
@@ -3,4 +3,4 @@
package connect
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.120.0"
+const goModuleVersion = "1.122.0"
diff --git a/service/connect/serializers.go b/service/connect/serializers.go
index ebffa71fc5c..62dda4559a7 100644
--- a/service/connect/serializers.go
+++ b/service/connect/serializers.go
@@ -3189,6 +3189,131 @@ func awsRestjson1_serializeOpDocumentCreateHoursOfOperationInput(v *CreateHoursO
return nil
}
+type awsRestjson1_serializeOpCreateHoursOfOperationOverride struct {
+}
+
+func (*awsRestjson1_serializeOpCreateHoursOfOperationOverride) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpCreateHoursOfOperationOverride) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*CreateHoursOfOperationOverrideInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "PUT"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsCreateHoursOfOperationOverrideInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentCreateHoursOfOperationOverrideInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsCreateHoursOfOperationOverrideInput(v *CreateHoursOfOperationOverrideInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.HoursOfOperationId == nil || len(*v.HoursOfOperationId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationId must not be empty")}
+ }
+ if v.HoursOfOperationId != nil {
+ if err := encoder.SetURI("HoursOfOperationId").String(*v.HoursOfOperationId); err != nil {
+ return err
+ }
+ }
+
+ if v.InstanceId == nil || len(*v.InstanceId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
+ }
+ if v.InstanceId != nil {
+ if err := encoder.SetURI("InstanceId").String(*v.InstanceId); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentCreateHoursOfOperationOverrideInput(v *CreateHoursOfOperationOverrideInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Config != nil {
+ ok := object.Key("Config")
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideConfigList(v.Config, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Description != nil {
+ ok := object.Key("Description")
+ ok.String(*v.Description)
+ }
+
+ if v.EffectiveFrom != nil {
+ ok := object.Key("EffectiveFrom")
+ ok.String(*v.EffectiveFrom)
+ }
+
+ if v.EffectiveTill != nil {
+ ok := object.Key("EffectiveTill")
+ ok.String(*v.EffectiveTill)
+ }
+
+ if v.Name != nil {
+ ok := object.Key("Name")
+ ok.String(*v.Name)
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpCreateInstance struct {
}
@@ -6230,6 +6355,95 @@ func awsRestjson1_serializeOpHttpBindingsDeleteHoursOfOperationInput(v *DeleteHo
return nil
}
+type awsRestjson1_serializeOpDeleteHoursOfOperationOverride struct {
+}
+
+func (*awsRestjson1_serializeOpDeleteHoursOfOperationOverride) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpDeleteHoursOfOperationOverride) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*DeleteHoursOfOperationOverrideInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides/{HoursOfOperationOverrideId}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "DELETE"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsDeleteHoursOfOperationOverrideInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsDeleteHoursOfOperationOverrideInput(v *DeleteHoursOfOperationOverrideInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.HoursOfOperationId == nil || len(*v.HoursOfOperationId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationId must not be empty")}
+ }
+ if v.HoursOfOperationId != nil {
+ if err := encoder.SetURI("HoursOfOperationId").String(*v.HoursOfOperationId); err != nil {
+ return err
+ }
+ }
+
+ if v.HoursOfOperationOverrideId == nil || len(*v.HoursOfOperationOverrideId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationOverrideId must not be empty")}
+ }
+ if v.HoursOfOperationOverrideId != nil {
+ if err := encoder.SetURI("HoursOfOperationOverrideId").String(*v.HoursOfOperationOverrideId); err != nil {
+ return err
+ }
+ }
+
+ if v.InstanceId == nil || len(*v.InstanceId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
+ }
+ if v.InstanceId != nil {
+ if err := encoder.SetURI("InstanceId").String(*v.InstanceId); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpDeleteInstance struct {
}
@@ -8398,6 +8612,95 @@ func awsRestjson1_serializeOpHttpBindingsDescribeHoursOfOperationInput(v *Descri
return nil
}
+type awsRestjson1_serializeOpDescribeHoursOfOperationOverride struct {
+}
+
+func (*awsRestjson1_serializeOpDescribeHoursOfOperationOverride) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpDescribeHoursOfOperationOverride) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*DescribeHoursOfOperationOverrideInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides/{HoursOfOperationOverrideId}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsDescribeHoursOfOperationOverrideInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsDescribeHoursOfOperationOverrideInput(v *DescribeHoursOfOperationOverrideInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.HoursOfOperationId == nil || len(*v.HoursOfOperationId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationId must not be empty")}
+ }
+ if v.HoursOfOperationId != nil {
+ if err := encoder.SetURI("HoursOfOperationId").String(*v.HoursOfOperationId); err != nil {
+ return err
+ }
+ }
+
+ if v.HoursOfOperationOverrideId == nil || len(*v.HoursOfOperationOverrideId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationOverrideId must not be empty")}
+ }
+ if v.HoursOfOperationOverrideId != nil {
+ if err := encoder.SetURI("HoursOfOperationOverrideId").String(*v.HoursOfOperationOverrideId); err != nil {
+ return err
+ }
+ }
+
+ if v.InstanceId == nil || len(*v.InstanceId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
+ }
+ if v.InstanceId != nil {
+ if err := encoder.SetURI("InstanceId").String(*v.InstanceId); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpDescribeInstance struct {
}
@@ -11383,6 +11686,94 @@ func awsRestjson1_serializeOpDocumentGetCurrentUserDataInput(v *GetCurrentUserDa
return nil
}
+type awsRestjson1_serializeOpGetEffectiveHoursOfOperations struct {
+}
+
+func (*awsRestjson1_serializeOpGetEffectiveHoursOfOperations) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetEffectiveHoursOfOperations) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetEffectiveHoursOfOperationsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/effective-hours-of-operations/{InstanceId}/{HoursOfOperationId}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsGetEffectiveHoursOfOperationsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetEffectiveHoursOfOperationsInput(v *GetEffectiveHoursOfOperationsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.FromDate != nil {
+ encoder.SetQuery("fromDate").String(*v.FromDate)
+ }
+
+ if v.HoursOfOperationId == nil || len(*v.HoursOfOperationId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationId must not be empty")}
+ }
+ if v.HoursOfOperationId != nil {
+ if err := encoder.SetURI("HoursOfOperationId").String(*v.HoursOfOperationId); err != nil {
+ return err
+ }
+ }
+
+ if v.InstanceId == nil || len(*v.InstanceId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
+ }
+ if v.InstanceId != nil {
+ if err := encoder.SetURI("InstanceId").String(*v.InstanceId); err != nil {
+ return err
+ }
+ }
+
+ if v.ToDate != nil {
+ encoder.SetQuery("toDate").String(*v.ToDate)
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpGetFederationToken struct {
}
@@ -13328,14 +13719,97 @@ func awsRestjson1_serializeOpHttpBindingsListEvaluationFormVersionsInput(v *List
return nil
}
-type awsRestjson1_serializeOpListFlowAssociations struct {
+type awsRestjson1_serializeOpListFlowAssociations struct {
+}
+
+func (*awsRestjson1_serializeOpListFlowAssociations) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListFlowAssociations) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*ListFlowAssociationsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/flow-associations-summary/{InstanceId}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "GET"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsListFlowAssociationsInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListFlowAssociationsInput(v *ListFlowAssociationsInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.InstanceId == nil || len(*v.InstanceId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
+ }
+ if v.InstanceId != nil {
+ if err := encoder.SetURI("InstanceId").String(*v.InstanceId); err != nil {
+ return err
+ }
+ }
+
+ if v.MaxResults != nil {
+ encoder.SetQuery("maxResults").Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ encoder.SetQuery("nextToken").String(*v.NextToken)
+ }
+
+ if len(v.ResourceType) > 0 {
+ encoder.SetQuery("ResourceType").String(string(v.ResourceType))
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpListHoursOfOperationOverrides struct {
}
-func (*awsRestjson1_serializeOpListFlowAssociations) ID() string {
+func (*awsRestjson1_serializeOpListHoursOfOperationOverrides) ID() string {
return "OperationSerializer"
}
-func (m *awsRestjson1_serializeOpListFlowAssociations) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+func (m *awsRestjson1_serializeOpListHoursOfOperationOverrides) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
out middleware.SerializeOutput, metadata middleware.Metadata, err error,
) {
_, span := tracing.StartSpan(ctx, "OperationSerializer")
@@ -13347,13 +13821,13 @@ func (m *awsRestjson1_serializeOpListFlowAssociations) HandleSerialize(ctx conte
return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
}
- input, ok := in.Parameters.(*ListFlowAssociationsInput)
+ input, ok := in.Parameters.(*ListHoursOfOperationOverridesInput)
_ = input
if !ok {
return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
}
- opPath, opQuery := httpbinding.SplitURI("/flow-associations-summary/{InstanceId}")
+ opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides")
request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
request.Method = "GET"
@@ -13369,7 +13843,7 @@ func (m *awsRestjson1_serializeOpListFlowAssociations) HandleSerialize(ctx conte
return out, metadata, &smithy.SerializationError{Err: err}
}
- if err := awsRestjson1_serializeOpHttpBindingsListFlowAssociationsInput(input, restEncoder); err != nil {
+ if err := awsRestjson1_serializeOpHttpBindingsListHoursOfOperationOverridesInput(input, restEncoder); err != nil {
return out, metadata, &smithy.SerializationError{Err: err}
}
@@ -13382,11 +13856,20 @@ func (m *awsRestjson1_serializeOpListFlowAssociations) HandleSerialize(ctx conte
span.End()
return next.HandleSerialize(ctx, in)
}
-func awsRestjson1_serializeOpHttpBindingsListFlowAssociationsInput(v *ListFlowAssociationsInput, encoder *httpbinding.Encoder) error {
+func awsRestjson1_serializeOpHttpBindingsListHoursOfOperationOverridesInput(v *ListHoursOfOperationOverridesInput, encoder *httpbinding.Encoder) error {
if v == nil {
return fmt.Errorf("unsupported serialization of nil %T", v)
}
+ if v.HoursOfOperationId == nil || len(*v.HoursOfOperationId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationId must not be empty")}
+ }
+ if v.HoursOfOperationId != nil {
+ if err := encoder.SetURI("HoursOfOperationId").String(*v.HoursOfOperationId); err != nil {
+ return err
+ }
+ }
+
if v.InstanceId == nil || len(*v.InstanceId) == 0 {
return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
}
@@ -13404,10 +13887,6 @@ func awsRestjson1_serializeOpHttpBindingsListFlowAssociationsInput(v *ListFlowAs
encoder.SetQuery("nextToken").String(*v.NextToken)
}
- if len(v.ResourceType) > 0 {
- encoder.SetQuery("ResourceType").String(string(v.ResourceType))
- }
-
return nil
}
@@ -17419,6 +17898,111 @@ func awsRestjson1_serializeOpDocumentSearchEmailAddressesInput(v *SearchEmailAdd
return nil
}
+type awsRestjson1_serializeOpSearchHoursOfOperationOverrides struct {
+}
+
+func (*awsRestjson1_serializeOpSearchHoursOfOperationOverrides) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpSearchHoursOfOperationOverrides) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*SearchHoursOfOperationOverridesInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/search-hours-of-operation-overrides")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentSearchHoursOfOperationOverridesInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsSearchHoursOfOperationOverridesInput(v *SearchHoursOfOperationOverridesInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentSearchHoursOfOperationOverridesInput(v *SearchHoursOfOperationOverridesInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.InstanceId != nil {
+ ok := object.Key("InstanceId")
+ ok.String(*v.InstanceId)
+ }
+
+ if v.MaxResults != nil {
+ ok := object.Key("MaxResults")
+ ok.Integer(*v.MaxResults)
+ }
+
+ if v.NextToken != nil {
+ ok := object.Key("NextToken")
+ ok.String(*v.NextToken)
+ }
+
+ if v.SearchCriteria != nil {
+ ok := object.Key("SearchCriteria")
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideSearchCriteria(v.SearchCriteria, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.SearchFilter != nil {
+ ok := object.Key("SearchFilter")
+ if err := awsRestjson1_serializeDocumentHoursOfOperationSearchFilter(v.SearchFilter, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpSearchHoursOfOperations struct {
}
@@ -19042,6 +19626,11 @@ func awsRestjson1_serializeOpDocumentStartChatContactInput(v *StartChatContactIn
ok.String(*v.ContactFlowId)
}
+ if v.CustomerId != nil {
+ ok := object.Key("CustomerId")
+ ok.String(*v.CustomerId)
+ }
+
if v.InitialMessage != nil {
ok := object.Key("InitialMessage")
if err := awsRestjson1_serializeDocumentChatMessage(v.InitialMessage, ok); err != nil {
@@ -22858,41 +23447,161 @@ func awsRestjson1_serializeOpDocumentUpdateEvaluationFormInput(v *UpdateEvaluati
ok.String(*v.Description)
}
- {
- ok := object.Key("EvaluationFormVersion")
- ok.Integer(v.EvaluationFormVersion)
- }
-
- if v.Items != nil {
- ok := object.Key("Items")
- if err := awsRestjson1_serializeDocumentEvaluationFormItemsList(v.Items, ok); err != nil {
- return err
- }
- }
-
- if v.ScoringStrategy != nil {
- ok := object.Key("ScoringStrategy")
- if err := awsRestjson1_serializeDocumentEvaluationFormScoringStrategy(v.ScoringStrategy, ok); err != nil {
- return err
- }
+ {
+ ok := object.Key("EvaluationFormVersion")
+ ok.Integer(v.EvaluationFormVersion)
+ }
+
+ if v.Items != nil {
+ ok := object.Key("Items")
+ if err := awsRestjson1_serializeDocumentEvaluationFormItemsList(v.Items, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.ScoringStrategy != nil {
+ ok := object.Key("ScoringStrategy")
+ if err := awsRestjson1_serializeDocumentEvaluationFormScoringStrategy(v.ScoringStrategy, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Title != nil {
+ ok := object.Key("Title")
+ ok.String(*v.Title)
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpUpdateHoursOfOperation struct {
+}
+
+func (*awsRestjson1_serializeOpUpdateHoursOfOperation) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpUpdateHoursOfOperation) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateHoursOfOperationInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentUpdateHoursOfOperationInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationInput(v *UpdateHoursOfOperationInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.HoursOfOperationId == nil || len(*v.HoursOfOperationId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationId must not be empty")}
+ }
+ if v.HoursOfOperationId != nil {
+ if err := encoder.SetURI("HoursOfOperationId").String(*v.HoursOfOperationId); err != nil {
+ return err
+ }
+ }
+
+ if v.InstanceId == nil || len(*v.InstanceId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
+ }
+ if v.InstanceId != nil {
+ if err := encoder.SetURI("InstanceId").String(*v.InstanceId); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentUpdateHoursOfOperationInput(v *UpdateHoursOfOperationInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Config != nil {
+ ok := object.Key("Config")
+ if err := awsRestjson1_serializeDocumentHoursOfOperationConfigList(v.Config, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Description != nil {
+ ok := object.Key("Description")
+ ok.String(*v.Description)
+ }
+
+ if v.Name != nil {
+ ok := object.Key("Name")
+ ok.String(*v.Name)
}
- if v.Title != nil {
- ok := object.Key("Title")
- ok.String(*v.Title)
+ if v.TimeZone != nil {
+ ok := object.Key("TimeZone")
+ ok.String(*v.TimeZone)
}
return nil
}
-type awsRestjson1_serializeOpUpdateHoursOfOperation struct {
+type awsRestjson1_serializeOpUpdateHoursOfOperationOverride struct {
}
-func (*awsRestjson1_serializeOpUpdateHoursOfOperation) ID() string {
+func (*awsRestjson1_serializeOpUpdateHoursOfOperationOverride) ID() string {
return "OperationSerializer"
}
-func (m *awsRestjson1_serializeOpUpdateHoursOfOperation) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+func (m *awsRestjson1_serializeOpUpdateHoursOfOperationOverride) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
out middleware.SerializeOutput, metadata middleware.Metadata, err error,
) {
_, span := tracing.StartSpan(ctx, "OperationSerializer")
@@ -22904,13 +23613,13 @@ func (m *awsRestjson1_serializeOpUpdateHoursOfOperation) HandleSerialize(ctx con
return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
}
- input, ok := in.Parameters.(*UpdateHoursOfOperationInput)
+ input, ok := in.Parameters.(*UpdateHoursOfOperationOverrideInput)
_ = input
if !ok {
return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
}
- opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}")
+ opPath, opQuery := httpbinding.SplitURI("/hours-of-operations/{InstanceId}/{HoursOfOperationId}/overrides/{HoursOfOperationOverrideId}")
request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
request.Method = "POST"
@@ -22926,14 +23635,14 @@ func (m *awsRestjson1_serializeOpUpdateHoursOfOperation) HandleSerialize(ctx con
return out, metadata, &smithy.SerializationError{Err: err}
}
- if err := awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationInput(input, restEncoder); err != nil {
+ if err := awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationOverrideInput(input, restEncoder); err != nil {
return out, metadata, &smithy.SerializationError{Err: err}
}
restEncoder.SetHeader("Content-Type").String("application/json")
jsonEncoder := smithyjson.NewEncoder()
- if err := awsRestjson1_serializeOpDocumentUpdateHoursOfOperationInput(input, jsonEncoder.Value); err != nil {
+ if err := awsRestjson1_serializeOpDocumentUpdateHoursOfOperationOverrideInput(input, jsonEncoder.Value); err != nil {
return out, metadata, &smithy.SerializationError{Err: err}
}
@@ -22950,7 +23659,7 @@ func (m *awsRestjson1_serializeOpUpdateHoursOfOperation) HandleSerialize(ctx con
span.End()
return next.HandleSerialize(ctx, in)
}
-func awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationInput(v *UpdateHoursOfOperationInput, encoder *httpbinding.Encoder) error {
+func awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationOverrideInput(v *UpdateHoursOfOperationOverrideInput, encoder *httpbinding.Encoder) error {
if v == nil {
return fmt.Errorf("unsupported serialization of nil %T", v)
}
@@ -22964,6 +23673,15 @@ func awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationInput(v *UpdateHo
}
}
+ if v.HoursOfOperationOverrideId == nil || len(*v.HoursOfOperationOverrideId) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member HoursOfOperationOverrideId must not be empty")}
+ }
+ if v.HoursOfOperationOverrideId != nil {
+ if err := encoder.SetURI("HoursOfOperationOverrideId").String(*v.HoursOfOperationOverrideId); err != nil {
+ return err
+ }
+ }
+
if v.InstanceId == nil || len(*v.InstanceId) == 0 {
return &smithy.SerializationError{Err: fmt.Errorf("input member InstanceId must not be empty")}
}
@@ -22976,13 +23694,13 @@ func awsRestjson1_serializeOpHttpBindingsUpdateHoursOfOperationInput(v *UpdateHo
return nil
}
-func awsRestjson1_serializeOpDocumentUpdateHoursOfOperationInput(v *UpdateHoursOfOperationInput, value smithyjson.Value) error {
+func awsRestjson1_serializeOpDocumentUpdateHoursOfOperationOverrideInput(v *UpdateHoursOfOperationOverrideInput, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
if v.Config != nil {
ok := object.Key("Config")
- if err := awsRestjson1_serializeDocumentHoursOfOperationConfigList(v.Config, ok); err != nil {
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideConfigList(v.Config, ok); err != nil {
return err
}
}
@@ -22992,16 +23710,21 @@ func awsRestjson1_serializeOpDocumentUpdateHoursOfOperationInput(v *UpdateHoursO
ok.String(*v.Description)
}
+ if v.EffectiveFrom != nil {
+ ok := object.Key("EffectiveFrom")
+ ok.String(*v.EffectiveFrom)
+ }
+
+ if v.EffectiveTill != nil {
+ ok := object.Key("EffectiveTill")
+ ok.String(*v.EffectiveTill)
+ }
+
if v.Name != nil {
ok := object.Key("Name")
ok.String(*v.Name)
}
- if v.TimeZone != nil {
- ok := object.Key("TimeZone")
- ok.String(*v.TimeZone)
- }
-
return nil
}
@@ -23217,6 +23940,107 @@ func awsRestjson1_serializeOpDocumentUpdateInstanceStorageConfigInput(v *UpdateI
return nil
}
+type awsRestjson1_serializeOpUpdateParticipantAuthentication struct {
+}
+
+func (*awsRestjson1_serializeOpUpdateParticipantAuthentication) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpUpdateParticipantAuthentication) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateParticipantAuthenticationInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/contact/update-participant-authentication")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentUpdateParticipantAuthenticationInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsUpdateParticipantAuthenticationInput(v *UpdateParticipantAuthenticationInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentUpdateParticipantAuthenticationInput(v *UpdateParticipantAuthenticationInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Code != nil {
+ ok := object.Key("Code")
+ ok.String(*v.Code)
+ }
+
+ if v.Error != nil {
+ ok := object.Key("Error")
+ ok.String(*v.Error)
+ }
+
+ if v.ErrorDescription != nil {
+ ok := object.Key("ErrorDescription")
+ ok.String(*v.ErrorDescription)
+ }
+
+ if v.InstanceId != nil {
+ ok := object.Key("InstanceId")
+ ok.String(*v.InstanceId)
+ }
+
+ if v.State != nil {
+ ok := object.Key("State")
+ ok.String(*v.State)
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpUpdateParticipantRoleConfig struct {
}
@@ -27326,6 +28150,16 @@ func awsRestjson1_serializeDocumentContactFlowModuleSearchCriteria(v *types.Cont
}
}
+ if len(v.StateCondition) > 0 {
+ ok := object.Key("StateCondition")
+ ok.String(string(v.StateCondition))
+ }
+
+ if len(v.StatusCondition) > 0 {
+ ok := object.Key("StatusCondition")
+ ok.String(string(v.StatusCondition))
+ }
+
if v.StringCondition != nil {
ok := object.Key("StringCondition")
if err := awsRestjson1_serializeDocumentStringCondition(v.StringCondition, ok); err != nil {
@@ -27669,6 +28503,28 @@ func awsRestjson1_serializeDocumentDataSetIds(v []string, value smithyjson.Value
return nil
}
+func awsRestjson1_serializeDocumentDateCondition(v *types.DateCondition, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if len(v.ComparisonType) > 0 {
+ ok := object.Key("ComparisonType")
+ ok.String(string(v.ComparisonType))
+ }
+
+ if v.FieldName != nil {
+ ok := object.Key("FieldName")
+ ok.String(*v.FieldName)
+ }
+
+ if v.Value != nil {
+ ok := object.Key("Value")
+ ok.String(*v.Value)
+ }
+
+ return nil
+}
+
func awsRestjson1_serializeDocumentDisconnectReason(v *types.DisconnectReason, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -28781,6 +29637,93 @@ func awsRestjson1_serializeDocumentHoursOfOperationConfigList(v []types.HoursOfO
return nil
}
+func awsRestjson1_serializeDocumentHoursOfOperationOverrideConfig(v *types.HoursOfOperationOverrideConfig, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if len(v.Day) > 0 {
+ ok := object.Key("Day")
+ ok.String(string(v.Day))
+ }
+
+ if v.EndTime != nil {
+ ok := object.Key("EndTime")
+ if err := awsRestjson1_serializeDocumentOverrideTimeSlice(v.EndTime, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.StartTime != nil {
+ ok := object.Key("StartTime")
+ if err := awsRestjson1_serializeDocumentOverrideTimeSlice(v.StartTime, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentHoursOfOperationOverrideConfigList(v []types.HoursOfOperationOverrideConfig, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideConfig(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentHoursOfOperationOverrideSearchConditionList(v []types.HoursOfOperationOverrideSearchCriteria, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideSearchCriteria(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentHoursOfOperationOverrideSearchCriteria(v *types.HoursOfOperationOverrideSearchCriteria, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.AndConditions != nil {
+ ok := object.Key("AndConditions")
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideSearchConditionList(v.AndConditions, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.DateCondition != nil {
+ ok := object.Key("DateCondition")
+ if err := awsRestjson1_serializeDocumentDateCondition(v.DateCondition, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.OrConditions != nil {
+ ok := object.Key("OrConditions")
+ if err := awsRestjson1_serializeDocumentHoursOfOperationOverrideSearchConditionList(v.OrConditions, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.StringCondition != nil {
+ ok := object.Key("StringCondition")
+ if err := awsRestjson1_serializeDocumentStringCondition(v.StringCondition, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
func awsRestjson1_serializeDocumentHoursOfOperationSearchConditionList(v []types.HoursOfOperationSearchCriteria, value smithyjson.Value) error {
array := value.Array()
defer array.Close()
@@ -29458,6 +30401,23 @@ func awsRestjson1_serializeDocumentOutboundRawMessage(v *types.OutboundRawMessag
return nil
}
+func awsRestjson1_serializeDocumentOverrideTimeSlice(v *types.OverrideTimeSlice, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Hours != nil {
+ ok := object.Key("Hours")
+ ok.Integer(*v.Hours)
+ }
+
+ if v.Minutes != nil {
+ ok := object.Key("Minutes")
+ ok.Integer(*v.Minutes)
+ }
+
+ return nil
+}
+
func awsRestjson1_serializeDocumentParticipantCapabilities(v *types.ParticipantCapabilities, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
diff --git a/service/connect/snapshot/api_op_CreateHoursOfOperationOverride.go.snap b/service/connect/snapshot/api_op_CreateHoursOfOperationOverride.go.snap
new file mode 100644
index 00000000000..441c6c370c6
--- /dev/null
+++ b/service/connect/snapshot/api_op_CreateHoursOfOperationOverride.go.snap
@@ -0,0 +1,41 @@
+CreateHoursOfOperationOverride
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_DeleteHoursOfOperationOverride.go.snap b/service/connect/snapshot/api_op_DeleteHoursOfOperationOverride.go.snap
new file mode 100644
index 00000000000..31f2b834b54
--- /dev/null
+++ b/service/connect/snapshot/api_op_DeleteHoursOfOperationOverride.go.snap
@@ -0,0 +1,41 @@
+DeleteHoursOfOperationOverride
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_DescribeHoursOfOperationOverride.go.snap b/service/connect/snapshot/api_op_DescribeHoursOfOperationOverride.go.snap
new file mode 100644
index 00000000000..a2b5ed6e1d8
--- /dev/null
+++ b/service/connect/snapshot/api_op_DescribeHoursOfOperationOverride.go.snap
@@ -0,0 +1,41 @@
+DescribeHoursOfOperationOverride
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_GetEffectiveHoursOfOperations.go.snap b/service/connect/snapshot/api_op_GetEffectiveHoursOfOperations.go.snap
new file mode 100644
index 00000000000..ea93956aacd
--- /dev/null
+++ b/service/connect/snapshot/api_op_GetEffectiveHoursOfOperations.go.snap
@@ -0,0 +1,41 @@
+GetEffectiveHoursOfOperations
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_ListHoursOfOperationOverrides.go.snap b/service/connect/snapshot/api_op_ListHoursOfOperationOverrides.go.snap
new file mode 100644
index 00000000000..bf943366009
--- /dev/null
+++ b/service/connect/snapshot/api_op_ListHoursOfOperationOverrides.go.snap
@@ -0,0 +1,41 @@
+ListHoursOfOperationOverrides
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_SearchHoursOfOperationOverrides.go.snap b/service/connect/snapshot/api_op_SearchHoursOfOperationOverrides.go.snap
new file mode 100644
index 00000000000..51ada01f754
--- /dev/null
+++ b/service/connect/snapshot/api_op_SearchHoursOfOperationOverrides.go.snap
@@ -0,0 +1,41 @@
+SearchHoursOfOperationOverrides
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_UpdateHoursOfOperationOverride.go.snap b/service/connect/snapshot/api_op_UpdateHoursOfOperationOverride.go.snap
new file mode 100644
index 00000000000..eabf6e0c156
--- /dev/null
+++ b/service/connect/snapshot/api_op_UpdateHoursOfOperationOverride.go.snap
@@ -0,0 +1,41 @@
+UpdateHoursOfOperationOverride
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot/api_op_UpdateParticipantAuthentication.go.snap b/service/connect/snapshot/api_op_UpdateParticipantAuthentication.go.snap
new file mode 100644
index 00000000000..ea3969fb97a
--- /dev/null
+++ b/service/connect/snapshot/api_op_UpdateParticipantAuthentication.go.snap
@@ -0,0 +1,41 @@
+UpdateParticipantAuthentication
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connect/snapshot_test.go b/service/connect/snapshot_test.go
index 844221ba5ac..846e0ef814d 100644
--- a/service/connect/snapshot_test.go
+++ b/service/connect/snapshot_test.go
@@ -422,6 +422,18 @@ func TestCheckSnapshot_CreateHoursOfOperation(t *testing.T) {
}
}
+func TestCheckSnapshot_CreateHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.CreateHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "CreateHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_CreateInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.CreateInstance(context.Background(), nil, func(o *Options) {
@@ -758,6 +770,18 @@ func TestCheckSnapshot_DeleteHoursOfOperation(t *testing.T) {
}
}
+func TestCheckSnapshot_DeleteHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.DeleteHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "DeleteHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_DeleteInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.DeleteInstance(context.Background(), nil, func(o *Options) {
@@ -1082,6 +1106,18 @@ func TestCheckSnapshot_DescribeHoursOfOperation(t *testing.T) {
}
}
+func TestCheckSnapshot_DescribeHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.DescribeHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "DescribeHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_DescribeInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.DescribeInstance(context.Background(), nil, func(o *Options) {
@@ -1502,6 +1538,18 @@ func TestCheckSnapshot_GetCurrentUserData(t *testing.T) {
}
}
+func TestCheckSnapshot_GetEffectiveHoursOfOperations(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetEffectiveHoursOfOperations(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetEffectiveHoursOfOperations")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_GetFederationToken(t *testing.T) {
svc := New(Options{})
_, err := svc.GetFederationToken(context.Background(), nil, func(o *Options) {
@@ -1778,6 +1826,18 @@ func TestCheckSnapshot_ListFlowAssociations(t *testing.T) {
}
}
+func TestCheckSnapshot_ListHoursOfOperationOverrides(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListHoursOfOperationOverrides(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "ListHoursOfOperationOverrides")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_ListHoursOfOperations(t *testing.T) {
svc := New(Options{})
_, err := svc.ListHoursOfOperations(context.Background(), nil, func(o *Options) {
@@ -2318,6 +2378,18 @@ func TestCheckSnapshot_SearchEmailAddresses(t *testing.T) {
}
}
+func TestCheckSnapshot_SearchHoursOfOperationOverrides(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.SearchHoursOfOperationOverrides(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "SearchHoursOfOperationOverrides")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_SearchHoursOfOperations(t *testing.T) {
svc := New(Options{})
_, err := svc.SearchHoursOfOperations(context.Background(), nil, func(o *Options) {
@@ -2918,6 +2990,18 @@ func TestCheckSnapshot_UpdateHoursOfOperation(t *testing.T) {
}
}
+func TestCheckSnapshot_UpdateHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_UpdateInstanceAttribute(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateInstanceAttribute(context.Background(), nil, func(o *Options) {
@@ -2942,6 +3026,18 @@ func TestCheckSnapshot_UpdateInstanceStorageConfig(t *testing.T) {
}
}
+func TestCheckSnapshot_UpdateParticipantAuthentication(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateParticipantAuthentication(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateParticipantAuthentication")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_UpdateParticipantRoleConfig(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateParticipantRoleConfig(context.Background(), nil, func(o *Options) {
@@ -3685,6 +3781,18 @@ func TestUpdateSnapshot_CreateHoursOfOperation(t *testing.T) {
}
}
+func TestUpdateSnapshot_CreateHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.CreateHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "CreateHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_CreateInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.CreateInstance(context.Background(), nil, func(o *Options) {
@@ -4021,6 +4129,18 @@ func TestUpdateSnapshot_DeleteHoursOfOperation(t *testing.T) {
}
}
+func TestUpdateSnapshot_DeleteHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.DeleteHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "DeleteHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_DeleteInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.DeleteInstance(context.Background(), nil, func(o *Options) {
@@ -4345,6 +4465,18 @@ func TestUpdateSnapshot_DescribeHoursOfOperation(t *testing.T) {
}
}
+func TestUpdateSnapshot_DescribeHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.DescribeHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "DescribeHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_DescribeInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.DescribeInstance(context.Background(), nil, func(o *Options) {
@@ -4765,6 +4897,18 @@ func TestUpdateSnapshot_GetCurrentUserData(t *testing.T) {
}
}
+func TestUpdateSnapshot_GetEffectiveHoursOfOperations(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetEffectiveHoursOfOperations(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetEffectiveHoursOfOperations")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_GetFederationToken(t *testing.T) {
svc := New(Options{})
_, err := svc.GetFederationToken(context.Background(), nil, func(o *Options) {
@@ -5041,6 +5185,18 @@ func TestUpdateSnapshot_ListFlowAssociations(t *testing.T) {
}
}
+func TestUpdateSnapshot_ListHoursOfOperationOverrides(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.ListHoursOfOperationOverrides(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "ListHoursOfOperationOverrides")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_ListHoursOfOperations(t *testing.T) {
svc := New(Options{})
_, err := svc.ListHoursOfOperations(context.Background(), nil, func(o *Options) {
@@ -5581,6 +5737,18 @@ func TestUpdateSnapshot_SearchEmailAddresses(t *testing.T) {
}
}
+func TestUpdateSnapshot_SearchHoursOfOperationOverrides(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.SearchHoursOfOperationOverrides(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "SearchHoursOfOperationOverrides")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_SearchHoursOfOperations(t *testing.T) {
svc := New(Options{})
_, err := svc.SearchHoursOfOperations(context.Background(), nil, func(o *Options) {
@@ -6181,6 +6349,18 @@ func TestUpdateSnapshot_UpdateHoursOfOperation(t *testing.T) {
}
}
+func TestUpdateSnapshot_UpdateHoursOfOperationOverride(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateHoursOfOperationOverride(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateHoursOfOperationOverride")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_UpdateInstanceAttribute(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateInstanceAttribute(context.Background(), nil, func(o *Options) {
@@ -6205,6 +6385,18 @@ func TestUpdateSnapshot_UpdateInstanceStorageConfig(t *testing.T) {
}
}
+func TestUpdateSnapshot_UpdateParticipantAuthentication(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateParticipantAuthentication(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateParticipantAuthentication")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_UpdateParticipantRoleConfig(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateParticipantRoleConfig(context.Background(), nil, func(o *Options) {
diff --git a/service/connect/types/enums.go b/service/connect/types/enums.go
index 1f0bc62e180..57d1f272cd3 100644
--- a/service/connect/types/enums.go
+++ b/service/connect/types/enums.go
@@ -522,6 +522,31 @@ func (CurrentMetricName) Values() []CurrentMetricName {
}
}
+type DateComparisonType string
+
+// Enum values for DateComparisonType
+const (
+ DateComparisonTypeGreaterThan DateComparisonType = "GREATER_THAN"
+ DateComparisonTypeLessThan DateComparisonType = "LESS_THAN"
+ DateComparisonTypeGreaterThanOrEqualTo DateComparisonType = "GREATER_THAN_OR_EQUAL_TO"
+ DateComparisonTypeLessThanOrEqualTo DateComparisonType = "LESS_THAN_OR_EQUAL_TO"
+ DateComparisonTypeEqualTo DateComparisonType = "EQUAL_TO"
+)
+
+// Values returns all known values for DateComparisonType. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (DateComparisonType) Values() []DateComparisonType {
+ return []DateComparisonType{
+ "GREATER_THAN",
+ "LESS_THAN",
+ "GREATER_THAN_OR_EQUAL_TO",
+ "LESS_THAN_OR_EQUAL_TO",
+ "EQUAL_TO",
+ }
+}
+
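As the doc comment on `Values()` notes, the enum can grow server-side, so the slice is only as current as the client build. A self-contained sketch of a membership check against `Values()`, using a local copy of the type (the real one lives in `service/connect/types`):

```go
package main

import "fmt"

// DateComparisonType is a local copy of the generated enum for this sketch.
type DateComparisonType string

// Values mirrors the generated method: all values known to this client build.
func (DateComparisonType) Values() []DateComparisonType {
	return []DateComparisonType{
		"GREATER_THAN",
		"LESS_THAN",
		"GREATER_THAN_OR_EQUAL_TO",
		"LESS_THAN_OR_EQUAL_TO",
		"EQUAL_TO",
	}
}

// isKnown reports whether s is a value this client build knows about. Because
// the service may add values later, an unknown string should be treated as
// "new to this client", not necessarily invalid.
func isKnown(s string) bool {
	for _, v := range DateComparisonType("").Values() {
		if string(v) == s {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnown("EQUAL_TO"), isKnown("NOT_EQUAL_TO"))
}
```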
type DeviceType string
// Enum values for DeviceType
@@ -1072,6 +1097,7 @@ const (
InstanceAttributeTypeHighVolumeOutbound InstanceAttributeType = "HIGH_VOLUME_OUTBOUND"
InstanceAttributeTypeEnhancedContactMonitoring InstanceAttributeType = "ENHANCED_CONTACT_MONITORING"
InstanceAttributeTypeEnhancedChatMonitoring InstanceAttributeType = "ENHANCED_CHAT_MONITORING"
+ InstanceAttributeTypeMultiPartyChatConference InstanceAttributeType = "MULTI_PARTY_CHAT_CONFERENCE"
)
// Values returns all known values for InstanceAttributeType. Note that this can
@@ -1091,6 +1117,7 @@ func (InstanceAttributeType) Values() []InstanceAttributeType {
"HIGH_VOLUME_OUTBOUND",
"ENHANCED_CONTACT_MONITORING",
"ENHANCED_CHAT_MONITORING",
+ "MULTI_PARTY_CHAT_CONFERENCE",
}
}
@@ -1200,6 +1227,7 @@ const (
IntegrationTypeSesIdentity IntegrationType = "SES_IDENTITY"
IntegrationTypeAnalyticsConnector IntegrationType = "ANALYTICS_CONNECTOR"
IntegrationTypeCallTransferConnector IntegrationType = "CALL_TRANSFER_CONNECTOR"
+ IntegrationTypeCognitoUserPool IntegrationType = "COGNITO_USER_POOL"
)
// Values returns all known values for IntegrationType. Note that this can be
@@ -1221,6 +1249,7 @@ func (IntegrationType) Values() []IntegrationType {
"SES_IDENTITY",
"ANALYTICS_CONNECTOR",
"CALL_TRANSFER_CONNECTOR",
+ "COGNITO_USER_POOL",
}
}
@@ -1465,6 +1494,35 @@ func (OutboundMessageSourceType) Values() []OutboundMessageSourceType {
}
}
+type OverrideDays string
+
+// Enum values for OverrideDays
+const (
+ OverrideDaysSunday OverrideDays = "SUNDAY"
+ OverrideDaysMonday OverrideDays = "MONDAY"
+ OverrideDaysTuesday OverrideDays = "TUESDAY"
+ OverrideDaysWednesday OverrideDays = "WEDNESDAY"
+ OverrideDaysThursday OverrideDays = "THURSDAY"
+ OverrideDaysFriday OverrideDays = "FRIDAY"
+ OverrideDaysSaturday OverrideDays = "SATURDAY"
+)
+
+// Values returns all known values for OverrideDays. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (OverrideDays) Values() []OverrideDays {
+ return []OverrideDays{
+ "SUNDAY",
+ "MONDAY",
+ "TUESDAY",
+ "WEDNESDAY",
+ "THURSDAY",
+ "FRIDAY",
+ "SATURDAY",
+ }
+}
+
type ParticipantRole string
// Enum values for ParticipantRole
diff --git a/service/connect/types/errors.go b/service/connect/types/errors.go
index 2dc2bf2156c..7532395fc33 100644
--- a/service/connect/types/errors.go
+++ b/service/connect/types/errors.go
@@ -33,7 +33,7 @@ func (e *AccessDeniedException) ErrorCode() string {
}
func (e *AccessDeniedException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
-// A conditional check failed.
+// Request processing failed because dependent condition failed.
type ConditionalOperationFailedException struct {
Message *string
diff --git a/service/connect/types/types.go b/service/connect/types/types.go
index 1e92336922a..4e9d616586f 100644
--- a/service/connect/types/types.go
+++ b/service/connect/types/types.go
@@ -915,6 +915,13 @@ type Contact struct {
// The customer or external third party participant endpoint.
CustomerEndpoint *EndpointInfo
+ // The customer's identification number. For example, the CustomerId may be a
+ // customer number from your CRM. You can create a Lambda function to pull the
+ // unique customer ID of the caller from your CRM system. If you enable Amazon
+ // Connect Voice ID capability, this attribute is populated with the
+ // CustomerSpeakerId of the caller.
+ CustomerId *string
+
// Information about customer’s voice activity.
CustomerVoiceActivity *CustomerVoiceActivity
@@ -1045,6 +1052,9 @@ type ContactConfiguration struct {
IncludeRawMessage bool
// The role of the participant in the chat conversation.
+ //
+ // Only CUSTOMER is currently supported. Any value other than CUSTOMER will
+ // result in an exception (4xx error).
ParticipantRole ParticipantRole
noSmithyDocumentSerde
@@ -1188,6 +1198,12 @@ type ContactFlowModuleSearchCriteria struct {
// A list of conditions which would be applied together with an OR condition.
OrConditions []ContactFlowModuleSearchCriteria
+ // The state of the flow.
+ StateCondition ContactFlowModuleState
+
+ // The status of the flow.
+ StatusCondition ContactFlowModuleStatus
+
// A leaf node condition which can be used to specify a string condition.
StringCondition *StringCondition
@@ -1640,6 +1656,22 @@ type CustomerVoiceActivity struct {
noSmithyDocumentSerde
}
+// An object to specify the hours of operation override date condition.
+type DateCondition struct {
+
+ // An object to specify the hours of operation override date condition
+ // comparisonType .
+ ComparisonType DateComparisonType
+
+ // An object to specify the hours of operation override date field.
+ FieldName *string
+
+ // An object to specify the hours of operation override date value.
+ Value *string
+
+ noSmithyDocumentSerde
+}
+
// Information about a reference when the referenceType is DATE . Otherwise, null.
type DateReference struct {
@@ -1766,6 +1798,18 @@ type DownloadUrlMetadata struct {
noSmithyDocumentSerde
}
+// Information about the hours of operations with the effective override applied.
+type EffectiveHoursOfOperations struct {
+
+ // The date that the hours of operation or overrides apply to.
+ Date *string
+
+ // Information about the hours of operations with the effective override applied.
+ OperationalHours []OperationalHour
+
+ noSmithyDocumentSerde
+}
+
// Contains information about a source or destination email address
type EmailAddressInfo struct {
@@ -3135,6 +3179,71 @@ type HoursOfOperationConfig struct {
noSmithyDocumentSerde
}
+// Information about the hours of operations override.
+type HoursOfOperationOverride struct {
+
+ // Configuration information for the hours of operation override: day, start time,
+ // and end time.
+ Config []HoursOfOperationOverrideConfig
+
+ // The description of the hours of operation override.
+ Description *string
+
+ // The date from which the hours of operation override would be effective.
+ EffectiveFrom *string
+
+ // The date till which the hours of operation override would be effective.
+ EffectiveTill *string
+
+ // The Amazon Resource Name (ARN) for the hours of operation.
+ HoursOfOperationArn *string
+
+ // The identifier for the hours of operation.
+ HoursOfOperationId *string
+
+ // The identifier for the hours of operation override.
+ HoursOfOperationOverrideId *string
+
+ // The name of the hours of operation override.
+ Name *string
+
+ noSmithyDocumentSerde
+}
+
+// Information about the hours of operation override config: day, start time, and
+// end time.
+type HoursOfOperationOverrideConfig struct {
+
+ // The day that the hours of operation override applies to.
+ Day OverrideDays
+
+ // The end time that your contact center closes if overrides are applied.
+ EndTime *OverrideTimeSlice
+
+ // The start time when your contact center opens if overrides are applied.
+ StartTime *OverrideTimeSlice
+
+ noSmithyDocumentSerde
+}
+
+// The search criteria to be used to return hours of operations overrides.
+type HoursOfOperationOverrideSearchCriteria struct {
+
+ // A list of conditions which would be applied together with an AND condition.
+ AndConditions []HoursOfOperationOverrideSearchCriteria
+
+ // A leaf node condition which can be used to specify a date condition.
+ DateCondition *DateCondition
+
+ // A list of conditions which would be applied together with an OR condition.
+ OrConditions []HoursOfOperationOverrideSearchCriteria
+
+ // A leaf node condition which can be used to specify a string condition.
+ StringCondition *StringCondition
+
+ noSmithyDocumentSerde
+}
+
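`HoursOfOperationOverrideSearchCriteria` is recursive: `AndConditions` and `OrConditions` each hold further criteria, while `DateCondition` and `StringCondition` are leaves. A self-contained sketch of how such a tree evaluates, with a local simplified shape (`Leaf` stands in for the leaf condition types):

```go
package main

import "fmt"

// criteria mirrors the recursive shape of the generated search criteria:
// leaves carry a predicate, interior nodes combine children with AND or OR.
type criteria struct {
	And  []criteria
	Or   []criteria
	Leaf func() bool // stand-in for StringCondition / DateCondition
}

// eval walks the tree: a leaf evaluates its predicate, an AND node requires
// every child to hold, an OR node requires at least one.
func eval(c criteria) bool {
	if c.Leaf != nil {
		return c.Leaf()
	}
	if len(c.And) > 0 {
		for _, ch := range c.And {
			if !eval(ch) {
				return false
			}
		}
		return true
	}
	for _, ch := range c.Or {
		if eval(ch) {
			return true
		}
	}
	return false
}

func main() {
	yes := func() bool { return true }
	no := func() bool { return false }
	// true AND (false OR true)
	c := criteria{And: []criteria{{Leaf: yes}, {Or: []criteria{{Leaf: no}, {Leaf: yes}}}}}
	fmt.Println(eval(c))
}
```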
// The search criteria to be used to return hours of operations.
type HoursOfOperationSearchCriteria struct {
@@ -3917,6 +4026,18 @@ type NumericQuestionPropertyValueAutomation struct {
noSmithyDocumentSerde
}
+// Information about the hours of operations with the effective override applied.
+type OperationalHour struct {
+
+ // The end time that your contact center closes.
+ End *OverrideTimeSlice
+
+ // The start time that your contact center opens.
+ Start *OverrideTimeSlice
+
+ noSmithyDocumentSerde
+}
+
// The additional recipients information of outbound email.
type OutboundAdditionalRecipients struct {
@@ -3988,6 +4109,22 @@ type OutboundRawMessage struct {
noSmithyDocumentSerde
}
+// The start time or end time for an hours of operation override.
+type OverrideTimeSlice struct {
+
+ // The hours.
+ //
+ // This member is required.
+ Hours *int32
+
+ // The minutes.
+ //
+ // This member is required.
+ Minutes *int32
+
+ noSmithyDocumentSerde
+}
+
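`Hours` and `Minutes` are required `*int32` rather than plain `int32` so an explicit midnight or on-the-hour value (0) is distinguishable from an unset field, which the validators reject. A self-contained sketch with a local copy of the struct and a pointer helper (`int32p` stands in for the SDK's `aws.Int32`):

```go
package main

import "fmt"

// OverrideTimeSlice is a local copy of the generated struct for this sketch.
// Pointer fields let an explicit 0 (midnight, on the hour) differ from "not set".
type OverrideTimeSlice struct {
	Hours   *int32
	Minutes *int32
}

// int32p stands in for the aws.Int32 pointer helper.
func int32p(v int32) *int32 { return &v }

func main() {
	open := OverrideTimeSlice{Hours: int32p(0), Minutes: int32p(0)} // midnight, explicitly set
	unset := OverrideTimeSlice{}                                    // nil fields would fail validation
	fmt.Println(*open.Hours, *open.Minutes, unset.Hours == nil)
}
```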
// The configuration for the allowed video and screen sharing capabilities for
// participants present over the call. For more information, see [Set up in-app, web, video calling, and screen sharing capabilities]in the Amazon
// Connect Administrator Guide.
@@ -6817,11 +6954,15 @@ type UserIdentityInfo struct {
Email *string
// The first name. This is required if you are using Amazon Connect or SAML for
- // identity management.
+ // identity management. Inputs must be in Unicode Normalization Form C (NFC). Text
+ // containing characters in a non-NFC form (for example, decomposed characters or
+ // combining marks) is not accepted.
FirstName *string
// The last name. This is required if you are using Amazon Connect or SAML for
- // identity management.
+ // identity management. Inputs must be in Unicode Normalization Form C (NFC). Text
+ // containing characters in a non-NFC form (for example, decomposed characters or
+ // combining marks) is not accepted.
LastName *string
// The user's mobile number.
@@ -7301,6 +7442,8 @@ type VocabularySummary struct {
type VoiceRecordingConfiguration struct {
// Identifies which IVR track is being recorded.
+ //
+ // Exactly one of the track configurations should be present in the request.
IvrRecordingTrack IvrRecordingTrack
// Identifies which track is being recorded.
diff --git a/service/connect/validators.go b/service/connect/validators.go
index 038a36e9e65..f466b6e3468 100644
--- a/service/connect/validators.go
+++ b/service/connect/validators.go
@@ -610,6 +610,26 @@ func (m *validateOpCreateHoursOfOperation) HandleInitialize(ctx context.Context,
return next.HandleInitialize(ctx, in)
}
+type validateOpCreateHoursOfOperationOverride struct {
+}
+
+func (*validateOpCreateHoursOfOperationOverride) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpCreateHoursOfOperationOverride) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*CreateHoursOfOperationOverrideInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpCreateHoursOfOperationOverrideInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpCreateInstance struct {
}
@@ -1170,6 +1190,26 @@ func (m *validateOpDeleteHoursOfOperation) HandleInitialize(ctx context.Context,
return next.HandleInitialize(ctx, in)
}
+type validateOpDeleteHoursOfOperationOverride struct {
+}
+
+func (*validateOpDeleteHoursOfOperationOverride) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpDeleteHoursOfOperationOverride) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*DeleteHoursOfOperationOverrideInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpDeleteHoursOfOperationOverrideInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpDeleteInstance struct {
}
@@ -1710,6 +1750,26 @@ func (m *validateOpDescribeHoursOfOperation) HandleInitialize(ctx context.Contex
return next.HandleInitialize(ctx, in)
}
+type validateOpDescribeHoursOfOperationOverride struct {
+}
+
+func (*validateOpDescribeHoursOfOperationOverride) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpDescribeHoursOfOperationOverride) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*DescribeHoursOfOperationOverrideInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpDescribeHoursOfOperationOverrideInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpDescribeInstanceAttribute struct {
}
@@ -2410,6 +2470,26 @@ func (m *validateOpGetCurrentUserData) HandleInitialize(ctx context.Context, in
return next.HandleInitialize(ctx, in)
}
+type validateOpGetEffectiveHoursOfOperations struct {
+}
+
+func (*validateOpGetEffectiveHoursOfOperations) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetEffectiveHoursOfOperations) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetEffectiveHoursOfOperationsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetEffectiveHoursOfOperationsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpGetFederationToken struct {
}
@@ -2870,6 +2950,26 @@ func (m *validateOpListFlowAssociations) HandleInitialize(ctx context.Context, i
return next.HandleInitialize(ctx, in)
}
+type validateOpListHoursOfOperationOverrides struct {
+}
+
+func (*validateOpListHoursOfOperationOverrides) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpListHoursOfOperationOverrides) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*ListHoursOfOperationOverridesInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpListHoursOfOperationOverridesInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpListHoursOfOperations struct {
}
@@ -3710,6 +3810,26 @@ func (m *validateOpSearchEmailAddresses) HandleInitialize(ctx context.Context, i
return next.HandleInitialize(ctx, in)
}
+type validateOpSearchHoursOfOperationOverrides struct {
+}
+
+func (*validateOpSearchHoursOfOperationOverrides) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpSearchHoursOfOperationOverrides) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*SearchHoursOfOperationOverridesInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpSearchHoursOfOperationOverridesInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpSearchHoursOfOperations struct {
}
@@ -4710,6 +4830,26 @@ func (m *validateOpUpdateHoursOfOperation) HandleInitialize(ctx context.Context,
return next.HandleInitialize(ctx, in)
}
+type validateOpUpdateHoursOfOperationOverride struct {
+}
+
+func (*validateOpUpdateHoursOfOperationOverride) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateHoursOfOperationOverride) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateHoursOfOperationOverrideInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateHoursOfOperationOverrideInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpUpdateInstanceAttribute struct {
}
@@ -4750,6 +4890,26 @@ func (m *validateOpUpdateInstanceStorageConfig) HandleInitialize(ctx context.Con
return next.HandleInitialize(ctx, in)
}
+type validateOpUpdateParticipantAuthentication struct {
+}
+
+func (*validateOpUpdateParticipantAuthentication) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateParticipantAuthentication) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateParticipantAuthenticationInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateParticipantAuthenticationInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpUpdateParticipantRoleConfig struct {
}
@@ -5510,6 +5670,10 @@ func addOpCreateHoursOfOperationValidationMiddleware(stack *middleware.Stack) er
return stack.Initialize.Add(&validateOpCreateHoursOfOperation{}, middleware.After)
}
+func addOpCreateHoursOfOperationOverrideValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpCreateHoursOfOperationOverride{}, middleware.After)
+}
+
func addOpCreateInstanceValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpCreateInstance{}, middleware.After)
}
@@ -5622,6 +5786,10 @@ func addOpDeleteHoursOfOperationValidationMiddleware(stack *middleware.Stack) er
return stack.Initialize.Add(&validateOpDeleteHoursOfOperation{}, middleware.After)
}
+func addOpDeleteHoursOfOperationOverrideValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpDeleteHoursOfOperationOverride{}, middleware.After)
+}
+
func addOpDeleteInstanceValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpDeleteInstance{}, middleware.After)
}
@@ -5730,6 +5898,10 @@ func addOpDescribeHoursOfOperationValidationMiddleware(stack *middleware.Stack)
return stack.Initialize.Add(&validateOpDescribeHoursOfOperation{}, middleware.After)
}
+func addOpDescribeHoursOfOperationOverrideValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpDescribeHoursOfOperationOverride{}, middleware.After)
+}
+
func addOpDescribeInstanceAttributeValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpDescribeInstanceAttribute{}, middleware.After)
}
@@ -5870,6 +6042,10 @@ func addOpGetCurrentUserDataValidationMiddleware(stack *middleware.Stack) error
return stack.Initialize.Add(&validateOpGetCurrentUserData{}, middleware.After)
}
+func addOpGetEffectiveHoursOfOperationsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetEffectiveHoursOfOperations{}, middleware.After)
+}
+
func addOpGetFederationTokenValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetFederationToken{}, middleware.After)
}
@@ -5962,6 +6138,10 @@ func addOpListFlowAssociationsValidationMiddleware(stack *middleware.Stack) erro
return stack.Initialize.Add(&validateOpListFlowAssociations{}, middleware.After)
}
+func addOpListHoursOfOperationOverridesValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpListHoursOfOperationOverrides{}, middleware.After)
+}
+
func addOpListHoursOfOperationsValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpListHoursOfOperations{}, middleware.After)
}
@@ -6130,6 +6310,10 @@ func addOpSearchEmailAddressesValidationMiddleware(stack *middleware.Stack) erro
return stack.Initialize.Add(&validateOpSearchEmailAddresses{}, middleware.After)
}
+func addOpSearchHoursOfOperationOverridesValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpSearchHoursOfOperationOverrides{}, middleware.After)
+}
+
func addOpSearchHoursOfOperationsValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpSearchHoursOfOperations{}, middleware.After)
}
@@ -6330,6 +6514,10 @@ func addOpUpdateHoursOfOperationValidationMiddleware(stack *middleware.Stack) er
return stack.Initialize.Add(&validateOpUpdateHoursOfOperation{}, middleware.After)
}
+func addOpUpdateHoursOfOperationOverrideValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateHoursOfOperationOverride{}, middleware.After)
+}
+
func addOpUpdateInstanceAttributeValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateInstanceAttribute{}, middleware.After)
}
@@ -6338,6 +6526,10 @@ func addOpUpdateInstanceStorageConfigValidationMiddleware(stack *middleware.Stac
return stack.Initialize.Add(&validateOpUpdateInstanceStorageConfig{}, middleware.After)
}
+func addOpUpdateParticipantAuthenticationValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateParticipantAuthentication{}, middleware.After)
+}
+
func addOpUpdateParticipantRoleConfigValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateParticipantRoleConfig{}, middleware.After)
}
@@ -7222,6 +7414,45 @@ func validateHoursOfOperationConfigList(v []types.HoursOfOperationConfig) error
}
}
+func validateHoursOfOperationOverrideConfig(v *types.HoursOfOperationOverrideConfig) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "HoursOfOperationOverrideConfig"}
+ if v.StartTime != nil {
+ if err := validateOverrideTimeSlice(v.StartTime); err != nil {
+ invalidParams.AddNested("StartTime", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.EndTime != nil {
+ if err := validateOverrideTimeSlice(v.EndTime); err != nil {
+ invalidParams.AddNested("EndTime", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateHoursOfOperationOverrideConfigList(v []types.HoursOfOperationOverrideConfig) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "HoursOfOperationOverrideConfigList"}
+ for i := range v {
+ if err := validateHoursOfOperationOverrideConfig(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateHoursOfOperationTimeSlice(v *types.HoursOfOperationTimeSlice) error {
if v == nil {
return nil
@@ -7548,6 +7779,24 @@ func validateOutboundRawMessage(v *types.OutboundRawMessage) error {
}
}
+func validateOverrideTimeSlice(v *types.OverrideTimeSlice) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "OverrideTimeSlice"}
+ if v.Hours == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Hours"))
+ }
+ if v.Minutes == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Minutes"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateParticipantDetails(v *types.ParticipantDetails) error {
if v == nil {
return nil
@@ -9064,6 +9313,40 @@ func validateOpCreateHoursOfOperationInput(v *CreateHoursOfOperationInput) error
}
}
+func validateOpCreateHoursOfOperationOverrideInput(v *CreateHoursOfOperationOverrideInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "CreateHoursOfOperationOverrideInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if v.HoursOfOperationId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationId"))
+ }
+ if v.Name == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Name"))
+ }
+ if v.Config == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Config"))
+ } else {
+ if err := validateHoursOfOperationOverrideConfigList(v.Config); err != nil {
+ invalidParams.AddNested("Config", err.(smithy.InvalidParamsError))
+ }
+ }
+ if v.EffectiveFrom == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("EffectiveFrom"))
+ }
+ if v.EffectiveTill == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("EffectiveTill"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpCreateInstanceInput(v *CreateInstanceInput) error {
if v == nil {
return nil
@@ -9688,6 +9971,27 @@ func validateOpDeleteHoursOfOperationInput(v *DeleteHoursOfOperationInput) error
}
}
+func validateOpDeleteHoursOfOperationOverrideInput(v *DeleteHoursOfOperationOverrideInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "DeleteHoursOfOperationOverrideInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if v.HoursOfOperationId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationId"))
+ }
+ if v.HoursOfOperationOverrideId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationOverrideId"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpDeleteInstanceInput(v *DeleteInstanceInput) error {
if v == nil {
return nil
@@ -10177,6 +10481,27 @@ func validateOpDescribeHoursOfOperationInput(v *DescribeHoursOfOperationInput) e
}
}
+func validateOpDescribeHoursOfOperationOverrideInput(v *DescribeHoursOfOperationOverrideInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "DescribeHoursOfOperationOverrideInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if v.HoursOfOperationId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationId"))
+ }
+ if v.HoursOfOperationOverrideId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationOverrideId"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpDescribeInstanceAttributeInput(v *DescribeInstanceAttributeInput) error {
if v == nil {
return nil
@@ -10838,6 +11163,30 @@ func validateOpGetCurrentUserDataInput(v *GetCurrentUserDataInput) error {
}
}
+func validateOpGetEffectiveHoursOfOperationsInput(v *GetEffectiveHoursOfOperationsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetEffectiveHoursOfOperationsInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if v.HoursOfOperationId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationId"))
+ }
+ if v.FromDate == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("FromDate"))
+ }
+ if v.ToDate == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ToDate"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpGetFederationTokenInput(v *GetFederationTokenInput) error {
if v == nil {
return nil
@@ -11243,6 +11592,24 @@ func validateOpListFlowAssociationsInput(v *ListFlowAssociationsInput) error {
}
}
+func validateOpListHoursOfOperationOverridesInput(v *ListHoursOfOperationOverridesInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "ListHoursOfOperationOverridesInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if v.HoursOfOperationId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationId"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpListHoursOfOperationsInput(v *ListHoursOfOperationsInput) error {
if v == nil {
return nil
@@ -11956,6 +12323,21 @@ func validateOpSearchEmailAddressesInput(v *SearchEmailAddressesInput) error {
}
}
+func validateOpSearchHoursOfOperationOverridesInput(v *SearchHoursOfOperationOverridesInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "SearchHoursOfOperationOverridesInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpSearchHoursOfOperationsInput(v *SearchHoursOfOperationsInput) error {
if v == nil {
return nil
@@ -13053,6 +13435,32 @@ func validateOpUpdateHoursOfOperationInput(v *UpdateHoursOfOperationInput) error
}
}
+func validateOpUpdateHoursOfOperationOverrideInput(v *UpdateHoursOfOperationOverrideInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateHoursOfOperationOverrideInput"}
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if v.HoursOfOperationId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationId"))
+ }
+ if v.HoursOfOperationOverrideId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("HoursOfOperationOverrideId"))
+ }
+ if v.Config != nil {
+ if err := validateHoursOfOperationOverrideConfigList(v.Config); err != nil {
+ invalidParams.AddNested("Config", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpUpdateInstanceAttributeInput(v *UpdateInstanceAttributeInput) error {
if v == nil {
return nil
@@ -13102,6 +13510,24 @@ func validateOpUpdateInstanceStorageConfigInput(v *UpdateInstanceStorageConfigIn
}
}
+func validateOpUpdateParticipantAuthenticationInput(v *UpdateParticipantAuthenticationInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateParticipantAuthenticationInput"}
+ if v.State == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("State"))
+ }
+ if v.InstanceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("InstanceId"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpUpdateParticipantRoleConfigInput(v *UpdateParticipantRoleConfigInput) error {
if v == nil {
return nil
diff --git a/service/connectparticipant/CHANGELOG.md b/service/connectparticipant/CHANGELOG.md
index 88bae9e5d37..a80d725fab1 100644
--- a/service/connectparticipant/CHANGELOG.md
+++ b/service/connectparticipant/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.28.0 (2024-12-18)
+
+* **Feature**: This release adds support for the GetAuthenticationUrl and CancelParticipantAuthentication APIs used for customer authentication within Amazon Connect chats. There are also minor updates to the GetAttachment API.
+
# v1.27.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/connectparticipant/api_op_CancelParticipantAuthentication.go b/service/connectparticipant/api_op_CancelParticipantAuthentication.go
new file mode 100644
index 00000000000..08a87ba1e87
--- /dev/null
+++ b/service/connectparticipant/api_op_CancelParticipantAuthentication.go
@@ -0,0 +1,161 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connectparticipant
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Cancels the authentication session. The opted out branch of the Authenticate
+// Customer flow block will be taken.
+//
+// The current supported channel is chat. This API is not supported for Apple
+// Messages for Business, WhatsApp, or SMS chats.
+func (c *Client) CancelParticipantAuthentication(ctx context.Context, params *CancelParticipantAuthenticationInput, optFns ...func(*Options)) (*CancelParticipantAuthenticationOutput, error) {
+ if params == nil {
+ params = &CancelParticipantAuthenticationInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "CancelParticipantAuthentication", params, optFns, c.addOperationCancelParticipantAuthenticationMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*CancelParticipantAuthenticationOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type CancelParticipantAuthenticationInput struct {
+
+ // The authentication token associated with the participant's connection.
+ //
+ // This member is required.
+ ConnectionToken *string
+
+ // The sessionId provided in the authenticationInitiated event.
+ //
+ // This member is required.
+ SessionId *string
+
+ noSmithyDocumentSerde
+}
+
+type CancelParticipantAuthenticationOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationCancelParticipantAuthenticationMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpCancelParticipantAuthentication{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpCancelParticipantAuthentication{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "CancelParticipantAuthentication"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpCancelParticipantAuthenticationValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opCancelParticipantAuthentication(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opCancelParticipantAuthentication(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "CancelParticipantAuthentication",
+ }
+}
diff --git a/service/connectparticipant/api_op_CompleteAttachmentUpload.go b/service/connectparticipant/api_op_CompleteAttachmentUpload.go
index 1b4bf51e1cd..e73f2ee72c8 100644
--- a/service/connectparticipant/api_op_CompleteAttachmentUpload.go
+++ b/service/connectparticipant/api_op_CompleteAttachmentUpload.go
@@ -14,11 +14,14 @@ import (
// pre-signed URL provided in StartAttachmentUpload API. A conflict exception is
// thrown when an attachment with that identifier is already being uploaded.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ConnectionToken is used for invoking this API instead of ParticipantToken .
//
// The Amazon Connect Participant Service APIs do not use [Signature Version 4 authentication].
//
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) CompleteAttachmentUpload(ctx context.Context, params *CompleteAttachmentUploadInput, optFns ...func(*Options)) (*CompleteAttachmentUploadOutput, error) {
if params == nil {
params = &CompleteAttachmentUploadInput{}
diff --git a/service/connectparticipant/api_op_CreateParticipantConnection.go b/service/connectparticipant/api_op_CreateParticipantConnection.go
index 0ffe1921375..7a423ca8344 100644
--- a/service/connectparticipant/api_op_CreateParticipantConnection.go
+++ b/service/connectparticipant/api_op_CreateParticipantConnection.go
@@ -13,6 +13,8 @@ import (
// Creates the participant's connection.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ParticipantToken is used for invoking this API instead of ConnectionToken .
//
// The participant token is valid for the lifetime of the participant – until they
@@ -46,6 +48,7 @@ import (
// [StartContactStreaming]: https://docs.aws.amazon.com/connect/latest/APIReference/API_StartContactStreaming.html
// [Enable real-time chat message streaming]: https://docs.aws.amazon.com/connect/latest/adminguide/chat-message-streaming.html
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) CreateParticipantConnection(ctx context.Context, params *CreateParticipantConnectionInput, optFns ...func(*Options)) (*CreateParticipantConnectionOutput, error) {
if params == nil {
params = &CreateParticipantConnectionInput{}
diff --git a/service/connectparticipant/api_op_DescribeView.go b/service/connectparticipant/api_op_DescribeView.go
index 3c0fa577771..e27ef349f08 100644
--- a/service/connectparticipant/api_op_DescribeView.go
+++ b/service/connectparticipant/api_op_DescribeView.go
@@ -12,6 +12,10 @@ import (
)
// Retrieves the view for the specified view token.
+//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) DescribeView(ctx context.Context, params *DescribeViewInput, optFns ...func(*Options)) (*DescribeViewOutput, error) {
if params == nil {
params = &DescribeViewInput{}
diff --git a/service/connectparticipant/api_op_DisconnectParticipant.go b/service/connectparticipant/api_op_DisconnectParticipant.go
index fac72e2ea85..88df24ef7c6 100644
--- a/service/connectparticipant/api_op_DisconnectParticipant.go
+++ b/service/connectparticipant/api_op_DisconnectParticipant.go
@@ -12,11 +12,14 @@ import (
// Disconnects a participant.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ConnectionToken is used for invoking this API instead of ParticipantToken .
//
// The Amazon Connect Participant Service APIs do not use [Signature Version 4 authentication].
//
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) DisconnectParticipant(ctx context.Context, params *DisconnectParticipantInput, optFns ...func(*Options)) (*DisconnectParticipantOutput, error) {
if params == nil {
params = &DisconnectParticipantInput{}
diff --git a/service/connectparticipant/api_op_GetAttachment.go b/service/connectparticipant/api_op_GetAttachment.go
index 204fb4b5892..9d59761511b 100644
--- a/service/connectparticipant/api_op_GetAttachment.go
+++ b/service/connectparticipant/api_op_GetAttachment.go
@@ -13,11 +13,14 @@ import (
// Provides a pre-signed URL for download of a completed attachment. This is an
// asynchronous API for use with active contacts.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ConnectionToken is used for invoking this API instead of ParticipantToken .
//
// The Amazon Connect Participant Service APIs do not use [Signature Version 4 authentication].
//
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) GetAttachment(ctx context.Context, params *GetAttachmentInput, optFns ...func(*Options)) (*GetAttachmentOutput, error) {
if params == nil {
params = &GetAttachmentInput{}
@@ -45,11 +48,20 @@ type GetAttachmentInput struct {
// This member is required.
ConnectionToken *string
+	// The expiration duration of the pre-signed URL, in seconds.
+ UrlExpiryInSeconds *int32
+
noSmithyDocumentSerde
}
type GetAttachmentOutput struct {
+ // The size of the attachment in bytes.
+ //
+ // This member is required.
+ AttachmentSizeInBytes *int64
+
// This is the pre-signed URL that can be used for uploading the file to Amazon S3
// when used in response to [StartAttachmentUpload].
//
diff --git a/service/connectparticipant/api_op_GetAuthenticationUrl.go b/service/connectparticipant/api_op_GetAuthenticationUrl.go
new file mode 100644
index 00000000000..b0442fce078
--- /dev/null
+++ b/service/connectparticipant/api_op_GetAuthenticationUrl.go
@@ -0,0 +1,180 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package connectparticipant
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Retrieves the AuthenticationUrl for the current authentication session for the
+// AuthenticateCustomer flow block.
+//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
+// - This API can only be called within one minute of receiving the
+// authenticationInitiated event.
+//
+// - The current supported channel is chat. This API is not supported for Apple
+// Messages for Business, WhatsApp, or SMS chats.
+//
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
+func (c *Client) GetAuthenticationUrl(ctx context.Context, params *GetAuthenticationUrlInput, optFns ...func(*Options)) (*GetAuthenticationUrlOutput, error) {
+ if params == nil {
+ params = &GetAuthenticationUrlInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetAuthenticationUrl", params, optFns, c.addOperationGetAuthenticationUrlMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetAuthenticationUrlOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetAuthenticationUrlInput struct {
+
+ // The authentication token associated with the participant's connection.
+ //
+ // This member is required.
+ ConnectionToken *string
+
+ // The URL where the customer will be redirected after Amazon Cognito authorizes
+ // the user.
+ //
+ // This member is required.
+ RedirectUri *string
+
+ // The sessionId provided in the authenticationInitiated event.
+ //
+ // This member is required.
+ SessionId *string
+
+ noSmithyDocumentSerde
+}
+
+type GetAuthenticationUrlOutput struct {
+
+ // The URL where the customer will sign in to the identity provider. This URL
+ // contains the authorize endpoint for the Cognito UserPool used in the
+ // authentication.
+ AuthenticationUrl *string
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetAuthenticationUrlMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpGetAuthenticationUrl{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetAuthenticationUrl{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetAuthenticationUrl"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetAuthenticationUrlValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetAuthenticationUrl(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetAuthenticationUrl(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetAuthenticationUrl",
+ }
+}
diff --git a/service/connectparticipant/api_op_GetTranscript.go b/service/connectparticipant/api_op_GetTranscript.go
index aaa9dffa6b1..3f731b5f3bb 100644
--- a/service/connectparticipant/api_op_GetTranscript.go
+++ b/service/connectparticipant/api_op_GetTranscript.go
@@ -15,6 +15,8 @@ import (
// For information about accessing past chat contact transcripts for a persistent
// chat, see [Enable persistent chat].
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// If you have a process that consumes events in the transcript of a chat that
// has ended, note that chat transcripts contain the following event content types
// if the event has occurred during the chat session:
@@ -35,6 +37,7 @@ import (
//
// [Enable persistent chat]: https://docs.aws.amazon.com/connect/latest/adminguide/chat-persistence.html
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) GetTranscript(ctx context.Context, params *GetTranscriptInput, optFns ...func(*Options)) (*GetTranscriptOutput, error) {
if params == nil {
params = &GetTranscriptInput{}
diff --git a/service/connectparticipant/api_op_SendEvent.go b/service/connectparticipant/api_op_SendEvent.go
index fc8522b7206..e1120c4723b 100644
--- a/service/connectparticipant/api_op_SendEvent.go
+++ b/service/connectparticipant/api_op_SendEvent.go
@@ -18,12 +18,15 @@ import (
// active participants in the chat. Using the SendEvent API for message receipts
// when a supervisor is barged-in will result in a conflict exception.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ConnectionToken is used for invoking this API instead of ParticipantToken .
//
// The Amazon Connect Participant Service APIs do not use [Signature Version 4 authentication].
//
// [CreateParticipantConnection]: https://docs.aws.amazon.com/connect-participant/latest/APIReference/API_CreateParticipantConnection.html
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) SendEvent(ctx context.Context, params *SendEventInput, optFns ...func(*Options)) (*SendEventOutput, error) {
if params == nil {
params = &SendEventInput{}
diff --git a/service/connectparticipant/api_op_SendMessage.go b/service/connectparticipant/api_op_SendMessage.go
index 06d66cb4454..416ca15312e 100644
--- a/service/connectparticipant/api_op_SendMessage.go
+++ b/service/connectparticipant/api_op_SendMessage.go
@@ -12,11 +12,14 @@ import (
// Sends a message.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ConnectionToken is used for invoking this API instead of ParticipantToken .
//
// The Amazon Connect Participant Service APIs do not use [Signature Version 4 authentication].
//
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) SendMessage(ctx context.Context, params *SendMessageInput, optFns ...func(*Options)) (*SendMessageOutput, error) {
if params == nil {
params = &SendMessageInput{}
diff --git a/service/connectparticipant/api_op_StartAttachmentUpload.go b/service/connectparticipant/api_op_StartAttachmentUpload.go
index 1017f3fab02..327235a4853 100644
--- a/service/connectparticipant/api_op_StartAttachmentUpload.go
+++ b/service/connectparticipant/api_op_StartAttachmentUpload.go
@@ -14,11 +14,14 @@ import (
// Provides a pre-signed Amazon S3 URL in response for uploading the file directly
// to S3.
//
+// For security recommendations, see [Amazon Connect Chat security best practices].
+//
// ConnectionToken is used for invoking this API instead of ParticipantToken .
//
// The Amazon Connect Participant Service APIs do not use [Signature Version 4 authentication].
//
// [Signature Version 4 authentication]: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
+// [Amazon Connect Chat security best practices]: https://docs.aws.amazon.com/connect/latest/adminguide/security-best-practices.html#bp-security-chat
func (c *Client) StartAttachmentUpload(ctx context.Context, params *StartAttachmentUploadInput, optFns ...func(*Options)) (*StartAttachmentUploadOutput, error) {
if params == nil {
params = &StartAttachmentUploadInput{}
@@ -76,7 +79,7 @@ type StartAttachmentUploadOutput struct {
// A unique identifier for the attachment.
AttachmentId *string
- // Fields to be used while uploading the attachment.
+ // The headers to be provided while uploading the file to the URL.
UploadMetadata *types.UploadMetadata
// Metadata pertaining to the operation's result.
diff --git a/service/connectparticipant/deserializers.go b/service/connectparticipant/deserializers.go
index 87e1927ce4f..3aa23d46343 100644
--- a/service/connectparticipant/deserializers.go
+++ b/service/connectparticipant/deserializers.go
@@ -29,6 +29,103 @@ func deserializeS3Expires(v string) (*time.Time, error) {
return &t, nil
}
+type awsRestjson1_deserializeOpCancelParticipantAuthentication struct {
+}
+
+func (*awsRestjson1_deserializeOpCancelParticipantAuthentication) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpCancelParticipantAuthentication) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorCancelParticipantAuthentication(response, &metadata)
+ }
+ output := &CancelParticipantAuthenticationOutput{}
+ out.Result = output
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorCancelParticipantAuthentication(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
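The error deserializer above resolves the error code with a fixed precedence: the `X-Amzn-ErrorType` header wins, the code found in the JSON body is only a fallback, both are sanitized, and dispatch is case-insensitive via `strings.EqualFold`. A self-contained sketch of that precedence (the `sanitize` helper is a simplified stand-in for `restjson.SanitizeErrorCode`, and the classification strings are invented):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize mimics the spirit of restjson.SanitizeErrorCode: drop any URI
// suffix ("Code:http://...") and namespace prefix ("com.example#Code").
func sanitize(code string) string {
	if i := strings.Index(code, ":"); i >= 0 {
		code = code[:i]
	}
	if i := strings.Index(code, "#"); i >= 0 {
		code = code[i+1:]
	}
	return code
}

// resolveErrorCode applies the precedence used by the generated
// deserializers: header code first, body code as fallback.
func resolveErrorCode(headerCode, bodyCode string) string {
	if headerCode != "" {
		return sanitize(headerCode)
	}
	if bodyCode != "" {
		return sanitize(bodyCode)
	}
	return "UnknownError"
}

func classify(code string) string {
	switch {
	case strings.EqualFold("AccessDeniedException", code):
		return "access denied"
	case strings.EqualFold("ThrottlingException", code):
		return "throttled"
	default:
		return "generic: " + code
	}
}

func main() {
	fmt.Println(classify(resolveErrorCode("com.amazonaws.connectparticipant#ThrottlingException", "")))
	fmt.Println(classify(resolveErrorCode("", "AccessDeniedException")))
}
```

The unknown-code path mirrors the generated `default:` branch, which wraps unrecognized codes in a `smithy.GenericAPIError` rather than failing deserialization.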
type awsRestjson1_deserializeOpCompleteAttachmentUpload struct {
}
@@ -706,6 +803,19 @@ func awsRestjson1_deserializeOpDocumentGetAttachmentOutput(v **GetAttachmentOutp
for key, value := range shape {
switch key {
+ case "AttachmentSizeInBytes":
+ if value != nil {
+ jtv, ok := value.(json.Number)
+ if !ok {
+ return fmt.Errorf("expected AttachmentSizeInBytes to be json.Number, got %T instead", value)
+ }
+ i64, err := jtv.Int64()
+ if err != nil {
+ return err
+ }
+ sv.AttachmentSizeInBytes = ptr.Int64(i64)
+ }
+
case "Url":
if value != nil {
jtv, ok := value.(string)
@@ -733,6 +843,171 @@ func awsRestjson1_deserializeOpDocumentGetAttachmentOutput(v **GetAttachmentOutp
return nil
}
+type awsRestjson1_deserializeOpGetAuthenticationUrl struct {
+}
+
+func (*awsRestjson1_deserializeOpGetAuthenticationUrl) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetAuthenticationUrl) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorGetAuthenticationUrl(response, &metadata)
+ }
+ output := &GetAuthenticationUrlOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentGetAuthenticationUrlOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetAuthenticationUrl(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("AccessDeniedException", errorCode):
+ return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+ case strings.EqualFold("InternalServerException", errorCode):
+ return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("ValidationException", errorCode):
+ return awsRestjson1_deserializeErrorValidationException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentGetAuthenticationUrlOutput(v **GetAuthenticationUrlOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetAuthenticationUrlOutput
+ if *v == nil {
+ sv = &GetAuthenticationUrlOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "AuthenticationUrl":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected AuthenticationUrl to be of type string, got %T instead", value)
+ }
+ sv.AuthenticationUrl = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
type awsRestjson1_deserializeOpGetTranscript struct {
}
diff --git a/service/connectparticipant/doc.go b/service/connectparticipant/doc.go
index a1ff0ded48b..9a7b3ff693d 100644
--- a/service/connectparticipant/doc.go
+++ b/service/connectparticipant/doc.go
@@ -3,6 +3,10 @@
// Package connectparticipant provides the API client, operations, and parameter
// types for Amazon Connect Participant Service.
//
+// [Participant Service actions]
+//
+// [Participant Service data types]
+//
// Amazon Connect is an easy-to-use omnichannel cloud contact center service that
// enables companies of any size to deliver superior customer service at a lower
// cost. Amazon Connect communications capabilities make it easy for companies to
@@ -13,4 +17,7 @@
// within a chat contact. The APIs in the service enable the following: sending
// chat messages, attachment sharing, managing a participant's connection state and
// message events, and retrieving chat transcripts.
+//
+// [Participant Service data types]: https://docs.aws.amazon.com/connect/latest/APIReference/API_Types_Amazon_Connect_Participant_Service.html
+// [Participant Service actions]: https://docs.aws.amazon.com/connect/latest/APIReference/API_Operations_Amazon_Connect_Participant_Service.html
package connectparticipant
diff --git a/service/connectparticipant/generated.json b/service/connectparticipant/generated.json
index 9fda50a46d0..684f55ea870 100644
--- a/service/connectparticipant/generated.json
+++ b/service/connectparticipant/generated.json
@@ -8,11 +8,13 @@
"files": [
"api_client.go",
"api_client_test.go",
+ "api_op_CancelParticipantAuthentication.go",
"api_op_CompleteAttachmentUpload.go",
"api_op_CreateParticipantConnection.go",
"api_op_DescribeView.go",
"api_op_DisconnectParticipant.go",
"api_op_GetAttachment.go",
+ "api_op_GetAuthenticationUrl.go",
"api_op_GetTranscript.go",
"api_op_SendEvent.go",
"api_op_SendMessage.go",
diff --git a/service/connectparticipant/go_module_metadata.go b/service/connectparticipant/go_module_metadata.go
index 0d34f6029ae..8258756b900 100644
--- a/service/connectparticipant/go_module_metadata.go
+++ b/service/connectparticipant/go_module_metadata.go
@@ -3,4 +3,4 @@
package connectparticipant
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.27.7"
+const goModuleVersion = "1.28.0"
diff --git a/service/connectparticipant/serializers.go b/service/connectparticipant/serializers.go
index 3d73a3a65ec..6c6af439383 100644
--- a/service/connectparticipant/serializers.go
+++ b/service/connectparticipant/serializers.go
@@ -15,6 +15,96 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
+type awsRestjson1_serializeOpCancelParticipantAuthentication struct {
+}
+
+func (*awsRestjson1_serializeOpCancelParticipantAuthentication) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpCancelParticipantAuthentication) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*CancelParticipantAuthenticationInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/participant/cancel-authentication")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsCancelParticipantAuthenticationInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentCancelParticipantAuthenticationInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsCancelParticipantAuthenticationInput(v *CancelParticipantAuthenticationInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ConnectionToken != nil {
+ locationName := "X-Amz-Bearer"
+ encoder.SetHeader(locationName).String(*v.ConnectionToken)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentCancelParticipantAuthenticationInput(v *CancelParticipantAuthenticationInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.SessionId != nil {
+ ok := object.Key("SessionId")
+ ok.String(*v.SessionId)
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpCompleteAttachmentUpload struct {
}
@@ -462,6 +552,106 @@ func awsRestjson1_serializeOpDocumentGetAttachmentInput(v *GetAttachmentInput, v
ok.String(*v.AttachmentId)
}
+ if v.UrlExpiryInSeconds != nil {
+ ok := object.Key("UrlExpiryInSeconds")
+ ok.Integer(*v.UrlExpiryInSeconds)
+ }
+
+ return nil
+}
+
+type awsRestjson1_serializeOpGetAuthenticationUrl struct {
+}
+
+func (*awsRestjson1_serializeOpGetAuthenticationUrl) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetAuthenticationUrl) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetAuthenticationUrlInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/participant/authentication-url")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsGetAuthenticationUrlInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ restEncoder.SetHeader("Content-Type").String("application/json")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsRestjson1_serializeOpDocumentGetAuthenticationUrlInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetAuthenticationUrlInput(v *GetAuthenticationUrlInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ConnectionToken != nil {
+ locationName := "X-Amz-Bearer"
+ encoder.SetHeader(locationName).String(*v.ConnectionToken)
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeOpDocumentGetAuthenticationUrlInput(v *GetAuthenticationUrlInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.RedirectUri != nil {
+ ok := object.Key("RedirectUri")
+ ok.String(*v.RedirectUri)
+ }
+
+ if v.SessionId != nil {
+ ok := object.Key("SessionId")
+ ok.String(*v.SessionId)
+ }
+
return nil
}
diff --git a/service/connectparticipant/snapshot/api_op_CancelParticipantAuthentication.go.snap b/service/connectparticipant/snapshot/api_op_CancelParticipantAuthentication.go.snap
new file mode 100644
index 00000000000..d8149d69007
--- /dev/null
+++ b/service/connectparticipant/snapshot/api_op_CancelParticipantAuthentication.go.snap
@@ -0,0 +1,41 @@
+CancelParticipantAuthentication
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connectparticipant/snapshot/api_op_GetAuthenticationUrl.go.snap b/service/connectparticipant/snapshot/api_op_GetAuthenticationUrl.go.snap
new file mode 100644
index 00000000000..6795aeaf601
--- /dev/null
+++ b/service/connectparticipant/snapshot/api_op_GetAuthenticationUrl.go.snap
@@ -0,0 +1,41 @@
+GetAuthenticationUrl
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/connectparticipant/snapshot_test.go b/service/connectparticipant/snapshot_test.go
index 43801bcd67e..83d1ee7e081 100644
--- a/service/connectparticipant/snapshot_test.go
+++ b/service/connectparticipant/snapshot_test.go
@@ -62,6 +62,18 @@ func testSnapshot(stack *middleware.Stack, operation string) error {
}
return snapshotOK{}
}
+func TestCheckSnapshot_CancelParticipantAuthentication(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.CancelParticipantAuthentication(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "CancelParticipantAuthentication")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_CompleteAttachmentUpload(t *testing.T) {
svc := New(Options{})
_, err := svc.CompleteAttachmentUpload(context.Background(), nil, func(o *Options) {
@@ -122,6 +134,18 @@ func TestCheckSnapshot_GetAttachment(t *testing.T) {
}
}
+func TestCheckSnapshot_GetAuthenticationUrl(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetAuthenticationUrl(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetAuthenticationUrl")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_GetTranscript(t *testing.T) {
svc := New(Options{})
_, err := svc.GetTranscript(context.Background(), nil, func(o *Options) {
@@ -169,6 +193,18 @@ func TestCheckSnapshot_StartAttachmentUpload(t *testing.T) {
t.Fatal(err)
}
}
+func TestUpdateSnapshot_CancelParticipantAuthentication(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.CancelParticipantAuthentication(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "CancelParticipantAuthentication")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_CompleteAttachmentUpload(t *testing.T) {
svc := New(Options{})
_, err := svc.CompleteAttachmentUpload(context.Background(), nil, func(o *Options) {
@@ -229,6 +265,18 @@ func TestUpdateSnapshot_GetAttachment(t *testing.T) {
}
}
+func TestUpdateSnapshot_GetAuthenticationUrl(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetAuthenticationUrl(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetAuthenticationUrl")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_GetTranscript(t *testing.T) {
svc := New(Options{})
_, err := svc.GetTranscript(context.Background(), nil, func(o *Options) {
diff --git a/service/connectparticipant/validators.go b/service/connectparticipant/validators.go
index b6ccd07f487..4fb21313ba9 100644
--- a/service/connectparticipant/validators.go
+++ b/service/connectparticipant/validators.go
@@ -9,6 +9,26 @@ import (
"github.com/aws/smithy-go/middleware"
)
+type validateOpCancelParticipantAuthentication struct {
+}
+
+func (*validateOpCancelParticipantAuthentication) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpCancelParticipantAuthentication) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*CancelParticipantAuthenticationInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpCancelParticipantAuthenticationInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpCompleteAttachmentUpload struct {
}
@@ -109,6 +129,26 @@ func (m *validateOpGetAttachment) HandleInitialize(ctx context.Context, in middl
return next.HandleInitialize(ctx, in)
}
+type validateOpGetAuthenticationUrl struct {
+}
+
+func (*validateOpGetAuthenticationUrl) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetAuthenticationUrl) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetAuthenticationUrlInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetAuthenticationUrlInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpGetTranscript struct {
}
@@ -189,6 +229,10 @@ func (m *validateOpStartAttachmentUpload) HandleInitialize(ctx context.Context,
return next.HandleInitialize(ctx, in)
}
+func addOpCancelParticipantAuthenticationValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpCancelParticipantAuthentication{}, middleware.After)
+}
+
func addOpCompleteAttachmentUploadValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpCompleteAttachmentUpload{}, middleware.After)
}
@@ -209,6 +253,10 @@ func addOpGetAttachmentValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetAttachment{}, middleware.After)
}
+func addOpGetAuthenticationUrlValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetAuthenticationUrl{}, middleware.After)
+}
+
func addOpGetTranscriptValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetTranscript{}, middleware.After)
}
@@ -225,6 +273,24 @@ func addOpStartAttachmentUploadValidationMiddleware(stack *middleware.Stack) err
return stack.Initialize.Add(&validateOpStartAttachmentUpload{}, middleware.After)
}
+func validateOpCancelParticipantAuthenticationInput(v *CancelParticipantAuthenticationInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "CancelParticipantAuthenticationInput"}
+ if v.SessionId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SessionId"))
+ }
+ if v.ConnectionToken == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ConnectionToken"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpCompleteAttachmentUploadInput(v *CompleteAttachmentUploadInput) error {
if v == nil {
return nil
@@ -312,6 +378,27 @@ func validateOpGetAttachmentInput(v *GetAttachmentInput) error {
}
}
+func validateOpGetAuthenticationUrlInput(v *GetAuthenticationUrlInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetAuthenticationUrlInput"}
+ if v.SessionId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("SessionId"))
+ }
+ if v.RedirectUri == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("RedirectUri"))
+ }
+ if v.ConnectionToken == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ConnectionToken"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpGetTranscriptInput(v *GetTranscriptInput) error {
if v == nil {
return nil
diff --git a/service/databasemigrationservice/CHANGELOG.md b/service/databasemigrationservice/CHANGELOG.md
index f68e07b1682..eb8ec8f9fc6 100644
--- a/service/databasemigrationservice/CHANGELOG.md
+++ b/service/databasemigrationservice/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.45.0 (2024-12-12)
+
+* **Feature**: Add parameters to support Kerberos authentication. Add parameter for disabling the Unicode source filter with PostgreSQL settings. Add parameter to use large integer value with Kinesis/Kafka settings.
+
# v1.44.5 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/databasemigrationservice/api_op_CreateReplicationInstance.go b/service/databasemigrationservice/api_op_CreateReplicationInstance.go
index f890dd7aaaa..18ab0ac90ff 100644
--- a/service/databasemigrationservice/api_op_CreateReplicationInstance.go
+++ b/service/databasemigrationservice/api_op_CreateReplicationInstance.go
@@ -99,6 +99,10 @@ type CreateReplicationInstanceInput struct {
// created, the default is the latest engine version available.
EngineVersion *string
+ // Specifies the ID of the secret that stores the key cache file required for
+	// Kerberos authentication when creating a replication instance.
+ KerberosAuthenticationSettings *types.KerberosAuthenticationSettings
+
// An KMS key identifier that is used to encrypt the data on the replication
// instance.
//
diff --git a/service/databasemigrationservice/api_op_DescribeDataProviders.go b/service/databasemigrationservice/api_op_DescribeDataProviders.go
index d989f13c893..a3d3b3ade39 100644
--- a/service/databasemigrationservice/api_op_DescribeDataProviders.go
+++ b/service/databasemigrationservice/api_op_DescribeDataProviders.go
@@ -32,7 +32,8 @@ type DescribeDataProvidersInput struct {
// Filters applied to the data providers described in the form of key-value pairs.
//
- // Valid filter names: data-provider-identifier
+	// Valid filter names and values: data-provider-identifier, a data provider ARN
+	// or name
Filters []types.Filter
// Specifies the unique pagination token that makes it possible to display the
diff --git a/service/databasemigrationservice/api_op_DescribeInstanceProfiles.go b/service/databasemigrationservice/api_op_DescribeInstanceProfiles.go
index 2d77e4ddad8..c4f7243f767 100644
--- a/service/databasemigrationservice/api_op_DescribeInstanceProfiles.go
+++ b/service/databasemigrationservice/api_op_DescribeInstanceProfiles.go
@@ -32,6 +32,9 @@ type DescribeInstanceProfilesInput struct {
// Filters applied to the instance profiles described in the form of key-value
// pairs.
+ //
+	// Valid filter names and values: instance-profile-identifier, an instance
+	// profile ARN or name
Filters []types.Filter
// Specifies the unique pagination token that makes it possible to display the
diff --git a/service/databasemigrationservice/api_op_DescribeMigrationProjects.go b/service/databasemigrationservice/api_op_DescribeMigrationProjects.go
index 7d773364b67..3af4da3a971 100644
--- a/service/databasemigrationservice/api_op_DescribeMigrationProjects.go
+++ b/service/databasemigrationservice/api_op_DescribeMigrationProjects.go
@@ -32,6 +32,14 @@ type DescribeMigrationProjectsInput struct {
// Filters applied to the migration projects described in the form of key-value
// pairs.
+ //
+ // Valid filter names and values:
+ //
+ // - instance-profile-identifier, instance profile arn or name
+ //
+ // - data-provider-identifier, data provider arn or name
+ //
+ // - migration-project-identifier, migration project arn or name
Filters []types.Filter
// Specifies the unique pagination token that makes it possible to display the
diff --git a/service/databasemigrationservice/api_op_ModifyReplicationInstance.go b/service/databasemigrationservice/api_op_ModifyReplicationInstance.go
index e7a2dc4fa3c..a1b5e6b1ea3 100644
--- a/service/databasemigrationservice/api_op_ModifyReplicationInstance.go
+++ b/service/databasemigrationservice/api_op_ModifyReplicationInstance.go
@@ -75,6 +75,10 @@ type ModifyReplicationInstanceInput struct {
// AllowMajorVersionUpgrade to true .
EngineVersion *string
+ // Specifies the ID of the secret that stores the key cache file required for
+	// Kerberos authentication when modifying a replication instance.
+ KerberosAuthenticationSettings *types.KerberosAuthenticationSettings
+
// Specifies whether the replication instance is a Multi-AZ deployment. You can't
// set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
MultiAZ *bool
diff --git a/service/databasemigrationservice/api_op_StartReplication.go b/service/databasemigrationservice/api_op_StartReplication.go
index d6a9f783827..e715cb4d3d3 100644
--- a/service/databasemigrationservice/api_op_StartReplication.go
+++ b/service/databasemigrationservice/api_op_StartReplication.go
@@ -41,6 +41,21 @@ type StartReplicationInput struct {
// The replication type.
//
+ // When the replication type is full-load or full-load-and-cdc , the only valid
+ // value for the first run of the replication is start-replication . This option
+ // will start the replication.
+ //
+ // You can also use ReloadTables to reload specific tables that failed during replication
+ // instead of restarting the replication.
+ //
+ // The resume-processing option isn't applicable for a full-load replication,
+ // because you can't resume partially loaded tables during the full load phase.
+ //
+ // For a full-load-and-cdc replication, DMS migrates table data, and then applies
+ // data changes that occur on the source. To load all the tables again, and start
+ // capturing source changes, use reload-target . Otherwise use resume-processing ,
+ // to replicate the changes from the last stop position.
+ //
// This member is required.
StartReplicationType *string
diff --git a/service/databasemigrationservice/deserializers.go b/service/databasemigrationservice/deserializers.go
index eb5f90a7828..52eac58b495 100644
--- a/service/databasemigrationservice/deserializers.go
+++ b/service/databasemigrationservice/deserializers.go
@@ -2590,6 +2590,9 @@ func awsAwsjson11_deserializeOpErrorDeleteEventSubscription(response *smithyhttp
errorMessage = bodyInfo.Message
}
switch {
+ case strings.EqualFold("AccessDeniedFault", errorCode):
+ return awsAwsjson11_deserializeErrorAccessDeniedFault(response, errorBody)
+
case strings.EqualFold("InvalidResourceStateFault", errorCode):
return awsAwsjson11_deserializeErrorInvalidResourceStateFault(response, errorBody)
@@ -3387,6 +3390,9 @@ func awsAwsjson11_deserializeOpErrorDeleteReplicationSubnetGroup(response *smith
errorMessage = bodyInfo.Message
}
switch {
+ case strings.EqualFold("AccessDeniedFault", errorCode):
+ return awsAwsjson11_deserializeErrorAccessDeniedFault(response, errorBody)
+
case strings.EqualFold("InvalidResourceStateFault", errorCode):
return awsAwsjson11_deserializeErrorInvalidResourceStateFault(response, errorBody)
@@ -8529,6 +8535,9 @@ func awsAwsjson11_deserializeOpErrorDescribeTableStatistics(response *smithyhttp
errorMessage = bodyInfo.Message
}
switch {
+ case strings.EqualFold("AccessDeniedFault", errorCode):
+ return awsAwsjson11_deserializeErrorAccessDeniedFault(response, errorBody)
+
case strings.EqualFold("InvalidResourceStateFault", errorCode):
return awsAwsjson11_deserializeErrorInvalidResourceStateFault(response, errorBody)
@@ -9459,6 +9468,9 @@ func awsAwsjson11_deserializeOpErrorModifyEventSubscription(response *smithyhttp
errorMessage = bodyInfo.Message
}
switch {
+ case strings.EqualFold("AccessDeniedFault", errorCode):
+ return awsAwsjson11_deserializeErrorAccessDeniedFault(response, errorBody)
+
case strings.EqualFold("KMSAccessDeniedFault", errorCode):
return awsAwsjson11_deserializeErrorKMSAccessDeniedFault(response, errorBody)
@@ -19191,6 +19203,73 @@ func awsAwsjson11_deserializeDocumentKafkaSettings(v **types.KafkaSettings, valu
sv.Topic = ptr.String(jtv)
}
+ case "UseLargeIntegerValue":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected BooleanOptional to be of type *bool, got %T instead", value)
+ }
+ sv.UseLargeIntegerValue = ptr.Bool(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeDocumentKerberosAuthenticationSettings(v **types.KerberosAuthenticationSettings, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.KerberosAuthenticationSettings
+ if *v == nil {
+ sv = &types.KerberosAuthenticationSettings{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "KeyCacheSecretIamArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.KeyCacheSecretIamArn = ptr.String(jtv)
+ }
+
+ case "KeyCacheSecretId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.KeyCacheSecretId = ptr.String(jtv)
+ }
+
+ case "Krb5FileContents":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ }
+ sv.Krb5FileContents = ptr.String(jtv)
+ }
+
default:
_, _ = key, value
@@ -19312,6 +19391,15 @@ func awsAwsjson11_deserializeDocumentKinesisSettings(v **types.KinesisSettings,
sv.StreamArn = ptr.String(jtv)
}
+ case "UseLargeIntegerValue":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected BooleanOptional to be of type *bool, got %T instead", value)
+ }
+ sv.UseLargeIntegerValue = ptr.Bool(jtv)
+ }
+
default:
_, _ = key, value
@@ -19893,6 +19981,15 @@ func awsAwsjson11_deserializeDocumentMicrosoftSQLServerSettings(v **types.Micros
for key, value := range shape {
switch key {
+ case "AuthenticationMethod":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected SqlServerAuthenticationMethod to be of type string, got %T instead", value)
+ }
+ sv.AuthenticationMethod = types.SqlServerAuthenticationMethod(jtv)
+ }
+
case "BcpPacketSize":
if value != nil {
jtv, ok := value.(json.Number)
@@ -21095,6 +21192,15 @@ func awsAwsjson11_deserializeDocumentOracleSettings(v **types.OracleSettings, va
sv.AsmUser = ptr.String(jtv)
}
+ case "AuthenticationMethod":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected OracleAuthenticationMethod to be of type string, got %T instead", value)
+ }
+ sv.AuthenticationMethod = types.OracleAuthenticationMethod(jtv)
+ }
+
case "CharLengthSemantics":
if value != nil {
jtv, ok := value.(string)
@@ -21535,7 +21641,7 @@ func awsAwsjson11_deserializeDocumentOrderableReplicationInstance(v **types.Orde
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ return fmt.Errorf("expected ReplicationInstanceClass to be of type string, got %T instead", value)
}
sv.ReplicationInstanceClass = ptr.String(jtv)
}
@@ -21922,6 +22028,15 @@ func awsAwsjson11_deserializeDocumentPostgreSQLSettings(v **types.PostgreSQLSett
sv.DdlArtifactsSchema = ptr.String(jtv)
}
+ case "DisableUnicodeSourceFilter":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected BooleanOptional to be of type *bool, got %T instead", value)
+ }
+ sv.DisableUnicodeSourceFilter = ptr.Bool(jtv)
+ }
+
case "ExecuteTimeout":
if value != nil {
jtv, ok := value.(json.Number)
@@ -23895,6 +24010,11 @@ func awsAwsjson11_deserializeDocumentReplicationInstance(v **types.ReplicationIn
}
}
+ case "KerberosAuthenticationSettings":
+ if err := awsAwsjson11_deserializeDocumentKerberosAuthenticationSettings(&sv.KerberosAuthenticationSettings, value); err != nil {
+ return err
+ }
+
case "KmsKeyId":
if value != nil {
jtv, ok := value.(string)
@@ -23958,7 +24078,7 @@ func awsAwsjson11_deserializeDocumentReplicationInstance(v **types.ReplicationIn
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ return fmt.Errorf("expected ReplicationInstanceClass to be of type string, got %T instead", value)
}
sv.ReplicationInstanceClass = ptr.String(jtv)
}
@@ -24380,7 +24500,7 @@ func awsAwsjson11_deserializeDocumentReplicationPendingModifiedValues(v **types.
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ return fmt.Errorf("expected ReplicationInstanceClass to be of type string, got %T instead", value)
}
sv.ReplicationInstanceClass = ptr.String(jtv)
}
@@ -25067,7 +25187,7 @@ func awsAwsjson11_deserializeDocumentReplicationTaskAssessmentResult(v **types.R
if value != nil {
jtv, ok := value.(string)
if !ok {
- return fmt.Errorf("expected String to be of type string, got %T instead", value)
+ return fmt.Errorf("expected SecretString to be of type string, got %T instead", value)
}
sv.S3ObjectUrl = ptr.String(jtv)
}
diff --git a/service/databasemigrationservice/go_module_metadata.go b/service/databasemigrationservice/go_module_metadata.go
index 9f158701906..fd78fb2b296 100644
--- a/service/databasemigrationservice/go_module_metadata.go
+++ b/service/databasemigrationservice/go_module_metadata.go
@@ -3,4 +3,4 @@
package databasemigrationservice
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.44.5"
+const goModuleVersion = "1.45.0"
diff --git a/service/databasemigrationservice/serializers.go b/service/databasemigrationservice/serializers.go
index 4a4554ccda6..a671e9c2a1a 100644
--- a/service/databasemigrationservice/serializers.go
+++ b/service/databasemigrationservice/serializers.go
@@ -7525,6 +7525,33 @@ func awsAwsjson11_serializeDocumentKafkaSettings(v *types.KafkaSettings, value s
ok.String(*v.Topic)
}
+ if v.UseLargeIntegerValue != nil {
+ ok := object.Key("UseLargeIntegerValue")
+ ok.Boolean(*v.UseLargeIntegerValue)
+ }
+
+ return nil
+}
+
+func awsAwsjson11_serializeDocumentKerberosAuthenticationSettings(v *types.KerberosAuthenticationSettings, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.KeyCacheSecretIamArn != nil {
+ ok := object.Key("KeyCacheSecretIamArn")
+ ok.String(*v.KeyCacheSecretIamArn)
+ }
+
+ if v.KeyCacheSecretId != nil {
+ ok := object.Key("KeyCacheSecretId")
+ ok.String(*v.KeyCacheSecretId)
+ }
+
+ if v.Krb5FileContents != nil {
+ ok := object.Key("Krb5FileContents")
+ ok.String(*v.Krb5FileContents)
+ }
+
return nil
}
@@ -7593,6 +7620,11 @@ func awsAwsjson11_serializeDocumentKinesisSettings(v *types.KinesisSettings, val
ok.String(*v.StreamArn)
}
+ if v.UseLargeIntegerValue != nil {
+ ok := object.Key("UseLargeIntegerValue")
+ ok.Boolean(*v.UseLargeIntegerValue)
+ }
+
return nil
}
@@ -7659,6 +7691,11 @@ func awsAwsjson11_serializeDocumentMicrosoftSQLServerSettings(v *types.Microsoft
object := value.Object()
defer object.Close()
+ if len(v.AuthenticationMethod) > 0 {
+ ok := object.Key("AuthenticationMethod")
+ ok.String(string(v.AuthenticationMethod))
+ }
+
if v.BcpPacketSize != nil {
ok := object.Key("BcpPacketSize")
ok.Integer(*v.BcpPacketSize)
@@ -8138,6 +8175,11 @@ func awsAwsjson11_serializeDocumentOracleSettings(v *types.OracleSettings, value
ok.String(*v.AsmUser)
}
+ if len(v.AuthenticationMethod) > 0 {
+ ok := object.Key("AuthenticationMethod")
+ ok.String(string(v.AuthenticationMethod))
+ }
+
if len(v.CharLengthSemantics) > 0 {
ok := object.Key("CharLengthSemantics")
ok.String(string(v.CharLengthSemantics))
@@ -8379,6 +8421,11 @@ func awsAwsjson11_serializeDocumentPostgreSQLSettings(v *types.PostgreSQLSetting
ok.String(*v.DdlArtifactsSchema)
}
+ if v.DisableUnicodeSourceFilter != nil {
+ ok := object.Key("DisableUnicodeSourceFilter")
+ ok.Boolean(*v.DisableUnicodeSourceFilter)
+ }
+
if v.ExecuteTimeout != nil {
ok := object.Key("ExecuteTimeout")
ok.Integer(*v.ExecuteTimeout)
@@ -9856,6 +9903,13 @@ func awsAwsjson11_serializeOpDocumentCreateReplicationInstanceInput(v *CreateRep
ok.String(*v.EngineVersion)
}
+ if v.KerberosAuthenticationSettings != nil {
+ ok := object.Key("KerberosAuthenticationSettings")
+ if err := awsAwsjson11_serializeDocumentKerberosAuthenticationSettings(v.KerberosAuthenticationSettings, ok); err != nil {
+ return err
+ }
+ }
+
if v.KmsKeyId != nil {
ok := object.Key("KmsKeyId")
ok.String(*v.KmsKeyId)
@@ -11927,6 +11981,13 @@ func awsAwsjson11_serializeOpDocumentModifyReplicationInstanceInput(v *ModifyRep
ok.String(*v.EngineVersion)
}
+ if v.KerberosAuthenticationSettings != nil {
+ ok := object.Key("KerberosAuthenticationSettings")
+ if err := awsAwsjson11_serializeDocumentKerberosAuthenticationSettings(v.KerberosAuthenticationSettings, ok); err != nil {
+ return err
+ }
+ }
+
if v.MultiAZ != nil {
ok := object.Key("MultiAZ")
ok.Boolean(*v.MultiAZ)
diff --git a/service/databasemigrationservice/types/enums.go b/service/databasemigrationservice/types/enums.go
index 8b50793b25e..f300eed3e13 100644
--- a/service/databasemigrationservice/types/enums.go
+++ b/service/databasemigrationservice/types/enums.go
@@ -465,6 +465,25 @@ func (NestingLevelValue) Values() []NestingLevelValue {
}
}
+type OracleAuthenticationMethod string
+
+// Enum values for OracleAuthenticationMethod
+const (
+ OracleAuthenticationMethodPassword OracleAuthenticationMethod = "password"
+ OracleAuthenticationMethodKerberos OracleAuthenticationMethod = "kerberos"
+)
+
+// Values returns all known values for OracleAuthenticationMethod. Note that this
+// can be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (OracleAuthenticationMethod) Values() []OracleAuthenticationMethod {
+ return []OracleAuthenticationMethod{
+ "password",
+ "kerberos",
+ }
+}
+
type OriginTypeValue string
// Enum values for OriginTypeValue
@@ -663,6 +682,26 @@ func (SourceType) Values() []SourceType {
}
}
+type SqlServerAuthenticationMethod string
+
+// Enum values for SqlServerAuthenticationMethod
+const (
+ SqlServerAuthenticationMethodPassword SqlServerAuthenticationMethod = "password"
+ SqlServerAuthenticationMethodKerberos SqlServerAuthenticationMethod = "kerberos"
+)
+
+// Values returns all known values for SqlServerAuthenticationMethod. Note that
+// this can be expanded in the future, and so it is only as up to date as the
+// client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (SqlServerAuthenticationMethod) Values() []SqlServerAuthenticationMethod {
+ return []SqlServerAuthenticationMethod{
+ "password",
+ "kerberos",
+ }
+}
+
type SslSecurityProtocolValue string
// Enum values for SslSecurityProtocolValue
diff --git a/service/databasemigrationservice/types/types.go b/service/databasemigrationservice/types/types.go
index 46f6e24aadf..7890a1a2cf5 100644
--- a/service/databasemigrationservice/types/types.go
+++ b/service/databasemigrationservice/types/types.go
@@ -1570,6 +1570,27 @@ type KafkaSettings struct {
// specifies "kafka-default-topic" as the migration topic.
Topic *string
+	// Specifies whether to use the large integer value with Kafka.
+ UseLargeIntegerValue *bool
+
+ noSmithyDocumentSerde
+}
+
+// Specifies the Kerberos authentication settings for use with DMS.
+type KerberosAuthenticationSettings struct {
+
+ // Specifies the Amazon Resource Name (ARN) of the IAM role that grants Amazon Web
+ // Services DMS access to the secret containing key cache file for the replication
+ // instance.
+ KeyCacheSecretIamArn *string
+
+ // Specifies the secret ID of the key cache for the replication instance.
+ KeyCacheSecretId *string
+
+	// Specifies the contents of the krb5 configuration file required for Kerberos
+	// authentication of the replication instance.
+ Krb5FileContents *string
+
noSmithyDocumentSerde
}
@@ -1627,6 +1648,9 @@ type KinesisSettings struct {
// The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
StreamArn *string
+	// Specifies whether to use the large integer value with Kinesis.
+ UseLargeIntegerValue *bool
+
noSmithyDocumentSerde
}
@@ -1714,6 +1738,9 @@ type MicrosoftSqlServerDataProviderSettings struct {
// Provides information that defines a Microsoft SQL Server endpoint.
type MicrosoftSQLServerSettings struct {
+	// Specifies the authentication method (password or Kerberos) to use with
+	// Microsoft SQL Server.
+ AuthenticationMethod SqlServerAuthenticationMethod
+
// The maximum size of the packets (in bytes) used to transfer data using BCP.
BcpPacketSize *int32
@@ -2262,9 +2289,9 @@ type OracleSettings struct {
// from the outset.
ArchivedLogDestId *int32
- // When this field is set to Y , DMS only accesses the archived redo logs. If the
- // archived redo logs are stored on Automatic Storage Management (ASM) only, the
- // DMS user account needs to be granted ASM privileges.
+ // When this field is set to True , DMS only accesses the archived redo logs. If
+ // the archived redo logs are stored on Automatic Storage Management (ASM) only,
+ // the DMS user account needs to be granted ASM privileges.
ArchivedLogsOnly *bool
// For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM)
@@ -2292,6 +2319,9 @@ type OracleSettings struct {
// [Configuration for change data capture (CDC) on an Oracle source database]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration
AsmUser *string
+	// Specifies the authentication method (password or Kerberos) to use with
+	// Oracle.
+ AuthenticationMethod OracleAuthenticationMethod
+
// Specifies whether the length of a character column is in bytes or in
// characters. To indicate that the character column length is in characters, set
// this attribute to CHAR . Otherwise, the character column length is in bytes.
@@ -2361,8 +2391,7 @@ type OracleSettings struct {
//
// You can specify an integer value between 0 (the default) and 240 (the maximum).
//
- // This parameter is only valid in DMS version 3.5.0 and later. DMS supports a
- // window of up to 9.5 hours including the value for OpenTransactionWindow .
+ // This parameter is only valid in DMS version 3.5.0 and later.
OpenTransactionWindow *int32
// Set this string attribute to the required value in order to use the Binary
@@ -2500,26 +2529,26 @@ type OracleSettings struct {
// use any specified prefix replacement to access all online redo logs.
UseAlternateFolderForOnline *bool
- // Set this attribute to Y to capture change data using the Binary Reader utility.
- // Set UseLogminerReader to N to set this attribute to Y. To use Binary Reader
- // with Amazon RDS for Oracle as the source, you set additional attributes. For
- // more information about using this setting with Oracle Automatic Storage
- // Management (ASM), see [Using Oracle LogMiner or DMS Binary Reader for CDC].
+ // Set this attribute to True to capture change data using the Binary Reader
+ // utility. Set UseLogminerReader to False to set this attribute to True. To use
+ // Binary Reader with Amazon RDS for Oracle as the source, you set additional
+ // attributes. For more information about using this setting with Oracle Automatic
+ // Storage Management (ASM), see [Using Oracle LogMiner or DMS Binary Reader for CDC].
//
// [Using Oracle LogMiner or DMS Binary Reader for CDC]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC
UseBFile *bool
- // Set this attribute to Y to have DMS use a direct path full load. Specify this
- // value to use the direct path protocol in the Oracle Call Interface (OCI). By
- // using this OCI protocol, you can bulk-load Oracle target tables during a full
+ // Set this attribute to True to have DMS use a direct path full load. Specify
+ // this value to use the direct path protocol in the Oracle Call Interface (OCI).
+ // By using this OCI protocol, you can bulk-load Oracle target tables during a full
// load.
UseDirectPathFullLoad *bool
- // Set this attribute to Y to capture change data using the Oracle LogMiner
- // utility (the default). Set this attribute to N if you want to access the redo
- // logs as a binary file. When you set UseLogminerReader to N, also set UseBfile
- // to Y. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or DMS Binary Reader for CDC]in the DMS
- // User Guide.
+ // Set this attribute to True to capture change data using the Oracle LogMiner
+ // utility (the default). Set this attribute to False if you want to access the
+ // redo logs as a binary file. When you set UseLogminerReader to False, also set
+ // UseBfile to True. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or DMS Binary Reader for CDC]
+ // in the DMS User Guide.
//
// [Using Oracle LogMiner or DMS Binary Reader for CDC]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC
UseLogminerReader *bool
@@ -2660,6 +2689,8 @@ type PostgreSQLSettings struct {
// To capture DDL events, DMS creates various artifacts in the PostgreSQL database
// when the task starts. You can later remove these artifacts.
//
+ // The default value is true .
+ //
// If this value is set to N , you don't have to create tables or triggers on the
// source database.
CaptureDdls *bool
@@ -2674,9 +2705,20 @@ type PostgreSQLSettings struct {
// The schema in which the operational DDL database artifacts are created.
//
+ // The default value is public .
+ //
// Example: ddlArtifactsSchema=xyzddlschema;
DdlArtifactsSchema *string
+	// Disables the Unicode source filter with PostgreSQL for values passed into the
+	// selection rule filter on source endpoint column values. By default, DMS
+	// performs source filter comparisons using a Unicode string, which can cause
+	// lookups to ignore the indexes in the text columns and slow down migrations.
+	//
+	// Disable Unicode support only when the selection rule filter is on an indexed
+	// text column in the source database.
+ DisableUnicodeSourceFilter *bool
+
// Sets the client statement timeout for the PostgreSQL instance, in seconds. The
// default value is 60 seconds.
//
@@ -2686,6 +2728,8 @@ type PostgreSQLSettings struct {
// When set to true , this value causes a task to fail if the actual size of a LOB
// column is greater than the specified LobMaxSize .
//
+ // The default value is false .
+ //
// If task is set to Limited LOB mode and this option is set to true, the task
// fails instead of truncating the LOB data.
FailTasksOnLobTruncation *bool
@@ -2694,28 +2738,42 @@ type PostgreSQLSettings struct {
// doing this, it prevents idle logical replication slots from holding onto old WAL
// logs, which can result in storage full situations on the source. This heartbeat
// keeps restart_lsn moving and prevents storage full scenarios.
+ //
+ // The default value is false .
HeartbeatEnable *bool
// Sets the WAL heartbeat frequency (in minutes).
+ //
+ // The default value is 5 minutes.
HeartbeatFrequency *int32
// Sets the schema in which the heartbeat artifacts are created.
+ //
+ // The default value is public .
HeartbeatSchema *string
// When true, lets PostgreSQL migrate the boolean type as boolean. By default,
// PostgreSQL migrates booleans as varchar(5) . You must set this setting on both
// the source and target endpoints for it to take effect.
+ //
+ // The default value is false .
MapBooleanAsBoolean *bool
// When true, DMS migrates JSONB values as CLOB.
+ //
+ // The default value is false .
MapJsonbAsClob *bool
- // When true, DMS migrates LONG values as VARCHAR.
+ // Sets what datatype to map LONG values as.
+ //
+ // The default value is wstring .
MapLongVarcharAs LongVarcharMappingType
// Specifies the maximum size (in KB) of any .csv file used to transfer data to
// PostgreSQL.
//
+ // The default value is 32,768 KB (32 MB).
+ //
// Example: maxFileSize=512
MaxFileSize *int32
@@ -2723,6 +2781,8 @@ type PostgreSQLSettings struct {
Password *string
// Specifies the plugin to use to create a replication slot.
+ //
+ // The default value is pglogical .
PluginName PluginNameValue
// Endpoint TCP port. The default is 5432.
@@ -3318,7 +3378,7 @@ type Replication struct {
// uses for its data source.
SourceEndpointArn *string
- // The replication type.
+ // The type of replication to start.
StartReplicationType *string
// The current status of the serverless replication.
@@ -3443,6 +3503,10 @@ type ReplicationInstance struct {
// The time the replication instance was created.
InstanceCreateTime *time.Time
+	// Specifies the settings required for Kerberos authentication when replicating
+	// an instance.
+ KerberosAuthenticationSettings *KerberosAuthenticationSettings
+
// An KMS key identifier that is used to encrypt the data on the replication
// instance.
//
@@ -3811,13 +3875,16 @@ type ReplicationTask struct {
// The reason the replication task was stopped. This response parameter can return
// one of the following values:
//
- // - "Stop Reason NORMAL"
+ // - "Stop Reason NORMAL" – The task completed successfully with no additional
+ // information returned.
//
// - "Stop Reason RECOVERABLE_ERROR"
//
// - "Stop Reason FATAL_ERROR"
//
- // - "Stop Reason FULL_LOAD_ONLY_FINISHED"
+ // - "Stop Reason FULL_LOAD_ONLY_FINISHED" – The task completed the full load
+ // phase. DMS applied cached changes if you set StopTaskCachedChangesApplied to
+ // true .
//
// - "Stop Reason STOPPED_AFTER_FULL_LOAD" – Full load completed, with cached
// changes not applied
@@ -3984,6 +4051,9 @@ type ReplicationTaskAssessmentRun struct {
//
// - "starting" – The assessment run is starting, but resources are not yet being
// provisioned for individual assessments.
+ //
+ // - "warning" – At least one individual assessment completed with a warning
+ // status.
Status *string
noSmithyDocumentSerde
diff --git a/service/datasync/CHANGELOG.md b/service/datasync/CHANGELOG.md
index 564e9c39438..980f21c812b 100644
--- a/service/datasync/CHANGELOG.md
+++ b/service/datasync/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.44.0 (2024-12-18)
+
+* **Feature**: AWS DataSync introduces the ability to update attributes for in-cloud locations.
+
# v1.43.5 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/datasync/api_op_CreateLocationEfs.go b/service/datasync/api_op_CreateLocationEfs.go
index 0e23a76f800..371082d18b0 100644
--- a/service/datasync/api_op_CreateLocationEfs.go
+++ b/service/datasync/api_op_CreateLocationEfs.go
@@ -72,8 +72,8 @@ type CreateLocationEfsInput struct {
InTransitEncryption types.EfsInTransitEncryption
// Specifies a mount path for your Amazon EFS file system. This is where DataSync
- // reads or writes data (depending on if this is a source or destination location)
- // on your file system.
+	// reads or writes data on your file system (depending on whether this is a
+	// source or destination location).
//
// By default, DataSync uses the root directory (or [access point] if you provide one by using
// AccessPointArn ). You can also include subdirectories using forward slashes (for
diff --git a/service/datasync/api_op_CreateLocationFsxLustre.go b/service/datasync/api_op_CreateLocationFsxLustre.go
index 2e4b0786de6..6fed3dd033f 100644
--- a/service/datasync/api_op_CreateLocationFsxLustre.go
+++ b/service/datasync/api_op_CreateLocationFsxLustre.go
@@ -34,25 +34,34 @@ func (c *Client) CreateLocationFsxLustre(ctx context.Context, params *CreateLoca
type CreateLocationFsxLustreInput struct {
- // The Amazon Resource Name (ARN) for the FSx for Lustre file system.
+ // Specifies the Amazon Resource Name (ARN) of the FSx for Lustre file system.
//
// This member is required.
FsxFilesystemArn *string
- // The Amazon Resource Names (ARNs) of the security groups that are used to
- // configure the FSx for Lustre file system.
+ // Specifies the Amazon Resource Names (ARNs) of up to five security groups that
+ // provide access to your FSx for Lustre file system.
+ //
+ // The security groups must be able to access the file system's ports. The file
+ // system must also allow access from the security groups. For information about
+ // file system access, see the [Amazon FSx for Lustre User Guide].
+ //
+ // [Amazon FSx for Lustre User Guide]: https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html
//
// This member is required.
SecurityGroupArns []string
- // A subdirectory in the location's path. This subdirectory in the FSx for Lustre
- // file system is used to read data from the FSx for Lustre source location or
- // write data to the FSx for Lustre destination.
+ // Specifies a mount path for your FSx for Lustre file system. The path can
+ // include subdirectories.
+ //
+ // When the location is used as a source, DataSync reads data from the mount path.
+ // When the location is used as a destination, DataSync writes data to the mount
+ // path. If you don't include this parameter, DataSync uses the file system's root
+ // directory ( / ).
Subdirectory *string
- // The key-value pair that represents a tag that you want to add to the resource.
- // The value can be an empty string. This value helps you manage, filter, and
- // search for your resources. We recommend that you create a name tag for your
+ // Specifies labels that help you categorize, filter, and search for your Amazon
+ // Web Services resources. We recommend creating at least a name tag for your
// location.
Tags []types.TagListEntry
@@ -61,8 +70,8 @@ type CreateLocationFsxLustreInput struct {
type CreateLocationFsxLustreOutput struct {
- // The Amazon Resource Name (ARN) of the FSx for Lustre file system location
- // that's created.
+ // The Amazon Resource Name (ARN) of the FSx for Lustre file system location that
+ // you created.
LocationArn *string
// Metadata pertaining to the operation's result.
diff --git a/service/datasync/api_op_CreateLocationFsxOntap.go b/service/datasync/api_op_CreateLocationFsxOntap.go
index addd6a609b5..cd96e078739 100644
--- a/service/datasync/api_op_CreateLocationFsxOntap.go
+++ b/service/datasync/api_op_CreateLocationFsxOntap.go
@@ -62,7 +62,8 @@ type CreateLocationFsxOntapInput struct {
// This member is required.
StorageVirtualMachineArn *string
- // Specifies a path to the file share in the SVM where you'll copy your data.
+ // Specifies a path to the file share in the SVM where you want to transfer data
+ // to or from.
//
// You can specify a junction path (also known as a mount point), qtree path (for
// NFS file shares), or share name (for SMB file shares). For example, your mount
diff --git a/service/datasync/api_op_CreateLocationFsxWindows.go b/service/datasync/api_op_CreateLocationFsxWindows.go
index 0954238e04c..7f814ae256f 100644
--- a/service/datasync/api_op_CreateLocationFsxWindows.go
+++ b/service/datasync/api_op_CreateLocationFsxWindows.go
@@ -79,8 +79,8 @@ type CreateLocationFsxWindowsInput struct {
// This member is required.
User *string
- // Specifies the name of the Microsoft Active Directory domain that the FSx for
- // Windows File Server file system belongs to.
+ // Specifies the name of the Windows domain that the FSx for Windows File Server
+ // file system belongs to.
//
// If you have multiple Active Directory domains in your environment, configuring
// this parameter makes sure that DataSync connects to the right file system.
diff --git a/service/datasync/api_op_CreateLocationNfs.go b/service/datasync/api_op_CreateLocationNfs.go
index 40f5725a210..9d4b1bed7e3 100644
--- a/service/datasync/api_op_CreateLocationNfs.go
+++ b/service/datasync/api_op_CreateLocationNfs.go
@@ -16,10 +16,6 @@ import (
//
// Before you begin, make sure that you understand how DataSync [accesses NFS file servers].
//
-// If you're copying data to or from an Snowcone device, you can also use
-// CreateLocationNfs to create your transfer location. For more information, see [Configuring transfers with Snowcone].
-//
-// [Configuring transfers with Snowcone]: https://docs.aws.amazon.com/datasync/latest/userguide/nfs-on-snowcone.html
// [accesses NFS file servers]: https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#accessing-nfs
func (c *Client) CreateLocationNfs(ctx context.Context, params *CreateLocationNfsInput, optFns ...func(*Options)) (*CreateLocationNfsOutput, error) {
if params == nil {
diff --git a/service/datasync/api_op_CreateLocationS3.go b/service/datasync/api_op_CreateLocationS3.go
index a2da3beb275..7f2009a2d06 100644
--- a/service/datasync/api_op_CreateLocationS3.go
+++ b/service/datasync/api_op_CreateLocationS3.go
@@ -58,9 +58,9 @@ type CreateLocationS3Input struct {
// Specifies the Amazon Resource Name (ARN) of the Identity and Access Management
// (IAM) role that DataSync uses to access your S3 bucket.
//
- // For more information, see [Accessing S3 buckets].
+ // For more information, see [Providing DataSync access to S3 buckets].
//
- // [Accessing S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
+ // [Providing DataSync access to S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
//
// This member is required.
S3Config *types.S3Config
diff --git a/service/datasync/api_op_DescribeLocationS3.go b/service/datasync/api_op_DescribeLocationS3.go
index 9856d4d768f..8fe7009227d 100644
--- a/service/datasync/api_op_DescribeLocationS3.go
+++ b/service/datasync/api_op_DescribeLocationS3.go
@@ -63,9 +63,9 @@ type DescribeLocationS3Output struct {
// Specifies the Amazon Resource Name (ARN) of the Identity and Access Management
// (IAM) role that DataSync uses to access your S3 bucket.
//
- // For more information, see [Accessing S3 buckets].
+ // For more information, see [Providing DataSync access to S3 buckets].
//
- // [Accessing S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
+ // [Providing DataSync access to S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
S3Config *types.S3Config
// When Amazon S3 is a destination location, this is the storage class that you
diff --git a/service/datasync/api_op_UpdateLocationAzureBlob.go b/service/datasync/api_op_UpdateLocationAzureBlob.go
index 5cc19054f1a..805f09045a6 100644
--- a/service/datasync/api_op_UpdateLocationAzureBlob.go
+++ b/service/datasync/api_op_UpdateLocationAzureBlob.go
@@ -11,8 +11,12 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Modifies some configurations of the Microsoft Azure Blob Storage transfer
-// location that you're using with DataSync.
+// Modifies the following configurations of the Microsoft Azure Blob Storage
+// transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with Azure Blob Storage].
+//
+// [Configuring DataSync transfers with Azure Blob Storage]: https://docs.aws.amazon.com/datasync/latest/userguide/creating-azure-blob-location.html
func (c *Client) UpdateLocationAzureBlob(ctx context.Context, params *UpdateLocationAzureBlobInput, optFns ...func(*Options)) (*UpdateLocationAzureBlobOutput, error) {
if params == nil {
params = &UpdateLocationAzureBlobInput{}
diff --git a/service/datasync/api_op_UpdateLocationEfs.go b/service/datasync/api_op_UpdateLocationEfs.go
new file mode 100644
index 00000000000..dd0d75815bb
--- /dev/null
+++ b/service/datasync/api_op_UpdateLocationEfs.go
@@ -0,0 +1,193 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package datasync
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/datasync/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Modifies the following configuration parameters of the Amazon EFS transfer
+// location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with Amazon EFS].
+//
+// [Configuring DataSync transfers with Amazon EFS]: https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html
+func (c *Client) UpdateLocationEfs(ctx context.Context, params *UpdateLocationEfsInput, optFns ...func(*Options)) (*UpdateLocationEfsOutput, error) {
+ if params == nil {
+ params = &UpdateLocationEfsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateLocationEfs", params, optFns, c.addOperationUpdateLocationEfsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateLocationEfsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateLocationEfsInput struct {
+
+ // Specifies the Amazon Resource Name (ARN) of the Amazon EFS transfer location
+ // that you're updating.
+ //
+ // This member is required.
+ LocationArn *string
+
+ // Specifies the Amazon Resource Name (ARN) of the access point that DataSync uses
+ // to mount your Amazon EFS file system.
+ //
+ // For more information, see [Accessing restricted Amazon EFS file systems].
+ //
+ // [Accessing restricted Amazon EFS file systems]: https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html#create-efs-location-iam
+ AccessPointArn *string
+
+ // Specifies an Identity and Access Management (IAM) role that allows DataSync to
+ // access your Amazon EFS file system.
+ //
+ // For information on creating this role, see [Creating a DataSync IAM role for Amazon EFS file system access].
+ //
+ // [Creating a DataSync IAM role for Amazon EFS file system access]: https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html#create-efs-location-iam-role
+ FileSystemAccessRoleArn *string
+
+ // Specifies whether you want DataSync to use Transport Layer Security (TLS) 1.2
+ // encryption when it transfers data to or from your Amazon EFS file system.
+ //
+ // If you specify an access point using AccessPointArn or an IAM role using
+ // FileSystemAccessRoleArn , you must set this parameter to TLS1_2 .
+ InTransitEncryption types.EfsInTransitEncryption
+
+ // Specifies a mount path for your Amazon EFS file system. This is where DataSync
+ // reads or writes data on your file system (depending on whether this is a
+ // source or destination location).
+ //
+ // By default, DataSync uses the root directory (or [access point] if you provide one by using
+ // AccessPointArn ). You can also include subdirectories using forward slashes (for
+ // example, /path/to/folder ).
+ //
+ // [access point]: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html
+ Subdirectory *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateLocationEfsOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateLocationEfsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateLocationEfs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateLocationEfs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateLocationEfs"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateLocationEfsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateLocationEfs(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateLocationEfs(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateLocationEfs",
+ }
+}
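Every generated operation above opens with the same guard: a nil `params` pointer is replaced with a zero-value input before the middleware stack runs. A minimal stdlib-only sketch of that defaulting pattern (the `Input` and `update` names here are hypothetical stand-ins, not SDK identifiers):

```go
package main

import "fmt"

// Input mirrors the shape of a generated operation input (hypothetical).
type Input struct {
	LocationArn *string
}

// update mirrors the generated guard: a nil params pointer is replaced
// with a zero-value input so downstream validation sees a real struct
// instead of dereferencing nil.
func update(params *Input) string {
	if params == nil {
		params = &Input{}
	}
	if params.LocationArn == nil {
		return "missing LocationArn"
	}
	return "updating " + *params.LocationArn
}

func main() {
	// Calling with nil is safe; validation reports the missing field.
	fmt.Println(update(nil))
	arn := "arn:aws:datasync:us-east-1:123456789012:location/loc-example"
	fmt.Println(update(&Input{LocationArn: &arn}))
}
```

In the real SDK this guard lets validation middleware (e.g. `addOpUpdateLocationEfsValidationMiddleware`) report the missing required member rather than the client panicking.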
diff --git a/service/datasync/api_op_UpdateLocationFsxLustre.go b/service/datasync/api_op_UpdateLocationFsxLustre.go
new file mode 100644
index 00000000000..064d0ea9c8a
--- /dev/null
+++ b/service/datasync/api_op_UpdateLocationFsxLustre.go
@@ -0,0 +1,167 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package datasync
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Modifies the following configuration parameters of the Amazon FSx for Lustre
+// transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with FSx for Lustre].
+//
+// [Configuring DataSync transfers with FSx for Lustre]: https://docs.aws.amazon.com/datasync/latest/userguide/create-lustre-location.html
+func (c *Client) UpdateLocationFsxLustre(ctx context.Context, params *UpdateLocationFsxLustreInput, optFns ...func(*Options)) (*UpdateLocationFsxLustreOutput, error) {
+ if params == nil {
+ params = &UpdateLocationFsxLustreInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateLocationFsxLustre", params, optFns, c.addOperationUpdateLocationFsxLustreMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateLocationFsxLustreOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateLocationFsxLustreInput struct {
+
+ // Specifies the Amazon Resource Name (ARN) of the FSx for Lustre transfer
+ // location that you're updating.
+ //
+ // This member is required.
+ LocationArn *string
+
+ // Specifies a mount path for your FSx for Lustre file system. The path can
+ // include subdirectories.
+ //
+ // When the location is used as a source, DataSync reads data from the mount path.
+ // When the location is used as a destination, DataSync writes data to the mount
+ // path. If you don't include this parameter, DataSync uses the file system's root
+ // directory ( / ).
+ Subdirectory *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateLocationFsxLustreOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateLocationFsxLustreMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateLocationFsxLustre{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateLocationFsxLustre{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateLocationFsxLustre"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateLocationFsxLustreValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateLocationFsxLustre(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateLocationFsxLustre(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateLocationFsxLustre",
+ }
+}
diff --git a/service/datasync/api_op_UpdateLocationFsxOntap.go b/service/datasync/api_op_UpdateLocationFsxOntap.go
new file mode 100644
index 00000000000..27cbece24de
--- /dev/null
+++ b/service/datasync/api_op_UpdateLocationFsxOntap.go
@@ -0,0 +1,176 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package datasync
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/datasync/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Modifies the following configuration parameters of the Amazon FSx for NetApp
+// ONTAP transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with FSx for ONTAP].
+//
+// [Configuring DataSync transfers with FSx for ONTAP]: https://docs.aws.amazon.com/datasync/latest/userguide/create-ontap-location.html
+func (c *Client) UpdateLocationFsxOntap(ctx context.Context, params *UpdateLocationFsxOntapInput, optFns ...func(*Options)) (*UpdateLocationFsxOntapOutput, error) {
+ if params == nil {
+ params = &UpdateLocationFsxOntapInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateLocationFsxOntap", params, optFns, c.addOperationUpdateLocationFsxOntapMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateLocationFsxOntapOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateLocationFsxOntapInput struct {
+
+ // Specifies the Amazon Resource Name (ARN) of the FSx for ONTAP transfer location
+ // that you're updating.
+ //
+ // This member is required.
+ LocationArn *string
+
+ // Specifies the data transfer protocol that DataSync uses to access your Amazon
+ // FSx file system.
+ Protocol *types.FsxUpdateProtocol
+
+ // Specifies a path to the file share in the storage virtual machine (SVM) where
+ // you want to transfer data to or from.
+ //
+ // You can specify a junction path (also known as a mount point), qtree path (for
+ // NFS file shares), or share name (for SMB file shares). For example, your mount
+ // path might be /vol1 , /vol1/tree1 , or /share1 .
+ //
+ // Don't specify a junction path in the SVM's root volume. For more information,
+ // see [Managing FSx for ONTAP storage virtual machines]in the Amazon FSx for NetApp ONTAP User Guide.
+ //
+ // [Managing FSx for ONTAP storage virtual machines]: https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/managing-svms.html
+ Subdirectory *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateLocationFsxOntapOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateLocationFsxOntapMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateLocationFsxOntap{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateLocationFsxOntap{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateLocationFsxOntap"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateLocationFsxOntapValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateLocationFsxOntap(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateLocationFsxOntap(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateLocationFsxOntap",
+ }
+}
diff --git a/service/datasync/api_op_UpdateLocationFsxOpenZfs.go b/service/datasync/api_op_UpdateLocationFsxOpenZfs.go
new file mode 100644
index 00000000000..f96b6ce4be0
--- /dev/null
+++ b/service/datasync/api_op_UpdateLocationFsxOpenZfs.go
@@ -0,0 +1,171 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package datasync
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/datasync/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Modifies the following configuration parameters of the Amazon FSx for OpenZFS
+// transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with FSx for OpenZFS].
+//
+// Request parameters related to SMB aren't supported with the
+// UpdateLocationFsxOpenZfs operation.
+//
+// [Configuring DataSync transfers with FSx for OpenZFS]: https://docs.aws.amazon.com/datasync/latest/userguide/create-openzfs-location.html
+func (c *Client) UpdateLocationFsxOpenZfs(ctx context.Context, params *UpdateLocationFsxOpenZfsInput, optFns ...func(*Options)) (*UpdateLocationFsxOpenZfsOutput, error) {
+ if params == nil {
+ params = &UpdateLocationFsxOpenZfsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateLocationFsxOpenZfs", params, optFns, c.addOperationUpdateLocationFsxOpenZfsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateLocationFsxOpenZfsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateLocationFsxOpenZfsInput struct {
+
+ // Specifies the Amazon Resource Name (ARN) of the FSx for OpenZFS transfer
+ // location that you're updating.
+ //
+ // This member is required.
+ LocationArn *string
+
+ // Specifies the data transfer protocol that DataSync uses to access your Amazon
+ // FSx file system.
+ Protocol *types.FsxProtocol
+
+ // Specifies a subdirectory in the location's path that must begin with /fsx .
+ // DataSync uses this subdirectory to read or write data (depending on whether the
+ // file system is a source or destination location).
+ Subdirectory *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateLocationFsxOpenZfsOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateLocationFsxOpenZfsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateLocationFsxOpenZfs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateLocationFsxOpenZfs{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateLocationFsxOpenZfs"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateLocationFsxOpenZfsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateLocationFsxOpenZfs(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateLocationFsxOpenZfs(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateLocationFsxOpenZfs",
+ }
+}
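Optional input members like `Subdirectory` are `*string` rather than `string` so the wire protocol can distinguish "field not sent" (nil) from "field set to empty". The SDK ships `aws.String` for this; a stdlib sketch of the same idea (`strPtr` and `describe` are illustrative helpers, not SDK functions):

```go
package main

import "fmt"

// strPtr is a tiny helper (in spirit like aws.String) for populating
// optional *string input fields such as Subdirectory.
func strPtr(s string) *string { return &s }

// describe reports whether an optional field was set, mirroring how the
// generated inputs distinguish "unset" (nil) from an explicit value.
func describe(sub *string) string {
	if sub == nil {
		return "unset"
	}
	return "set to " + *sub
}

func main() {
	fmt.Println(describe(nil))              // unset: the field is omitted
	fmt.Println(describe(strPtr("/fsx/a"))) // set to /fsx/a
}
```

For an update-style API this distinction is load-bearing: a nil member means "leave this setting alone", while a non-nil member means "overwrite it".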
diff --git a/service/datasync/api_op_UpdateLocationFsxWindows.go b/service/datasync/api_op_UpdateLocationFsxWindows.go
new file mode 100644
index 00000000000..981bfc35483
--- /dev/null
+++ b/service/datasync/api_op_UpdateLocationFsxWindows.go
@@ -0,0 +1,184 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package datasync
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Modifies the following configuration parameters of the Amazon FSx for Windows
+// File Server transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with FSx for Windows File Server].
+//
+// [Configuring DataSync transfers with FSx for Windows File Server]: https://docs.aws.amazon.com/datasync/latest/userguide/create-fsx-location.html
+func (c *Client) UpdateLocationFsxWindows(ctx context.Context, params *UpdateLocationFsxWindowsInput, optFns ...func(*Options)) (*UpdateLocationFsxWindowsOutput, error) {
+ if params == nil {
+ params = &UpdateLocationFsxWindowsInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateLocationFsxWindows", params, optFns, c.addOperationUpdateLocationFsxWindowsMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateLocationFsxWindowsOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateLocationFsxWindowsInput struct {
+
+ // Specifies the ARN of the FSx for Windows File Server transfer location that
+ // you're updating.
+ //
+ // This member is required.
+ LocationArn *string
+
+ // Specifies the name of the Windows domain that your FSx for Windows File Server
+ // file system belongs to.
+ //
+ // If you have multiple Active Directory domains in your environment, configuring
+ // this parameter makes sure that DataSync connects to the right file system.
+ Domain *string
+
+ // Specifies the password of the user with the permissions to mount and access the
+ // files, folders, and file metadata in your FSx for Windows File Server file
+ // system.
+ Password *string
+
+ // Specifies a mount path for your file system using forward slashes. DataSync
+ // uses this subdirectory to read or write data (depending on whether the file
+ // system is a source or destination location).
+ Subdirectory *string
+
+ // Specifies the user with the permissions to mount and access the files, folders,
+ // and file metadata in your FSx for Windows File Server file system.
+ //
+ // For information about choosing a user with the right level of access for your
+ // transfer, see [required permissions]for FSx for Windows File Server locations.
+ //
+ // [required permissions]: https://docs.aws.amazon.com/datasync/latest/userguide/create-fsx-location.html#create-fsx-windows-location-permissions
+ User *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateLocationFsxWindowsOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateLocationFsxWindowsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateLocationFsxWindows{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateLocationFsxWindows{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateLocationFsxWindows"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateLocationFsxWindowsValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateLocationFsxWindows(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateLocationFsxWindows(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateLocationFsxWindows",
+ }
+}
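The long `addOperation*Middlewares` functions above register handlers in a fixed order on serialize/deserialize/finalize steps. Conceptually, each middleware wraps the next handler, so the last-wrapped layer runs outermost. A toy stdlib sketch of that wrapping (loosely modeled on smithy-go's stack, with hypothetical names):

```go
package main

import "fmt"

// handler is a toy request handler; middleware wraps one handler in another,
// loosely mirroring how the generated code layers stack steps.
type handler func(req string) string

// named returns a middleware that tags the request with its name,
// delegating to the next handler in the chain.
func named(name string, next handler) handler {
	return func(req string) string {
		return name + "(" + next(req) + ")"
	}
}

func main() {
	// Start from the base handler and wrap outward; the last layer
	// added becomes the outermost one and runs first.
	h := handler(func(req string) string { return req })
	for _, name := range []string{"serialize", "sign", "retry"} {
		h = named(name, h)
	}
	fmt.Println(h("req")) // retry(sign(serialize(req)))
}
```

Ordering is why the generated code checks every `Add` call's error: a misplaced layer (e.g. signing before serialization) would produce an invalid request.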
diff --git a/service/datasync/api_op_UpdateLocationHdfs.go b/service/datasync/api_op_UpdateLocationHdfs.go
index 85f8311d287..b56763bdbb2 100644
--- a/service/datasync/api_op_UpdateLocationHdfs.go
+++ b/service/datasync/api_op_UpdateLocationHdfs.go
@@ -11,8 +11,12 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Updates some parameters of a previously created location for a Hadoop
-// Distributed File System cluster.
+// Modifies the following configuration parameters of the Hadoop Distributed File
+// System (HDFS) transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with an HDFS cluster].
+//
+// [Configuring DataSync transfers with an HDFS cluster]: https://docs.aws.amazon.com/datasync/latest/userguide/create-hdfs-location.html
func (c *Client) UpdateLocationHdfs(ctx context.Context, params *UpdateLocationHdfsInput, optFns ...func(*Options)) (*UpdateLocationHdfsOutput, error) {
if params == nil {
params = &UpdateLocationHdfsInput{}
diff --git a/service/datasync/api_op_UpdateLocationNfs.go b/service/datasync/api_op_UpdateLocationNfs.go
index 7d767bf84db..c6c08d4abbe 100644
--- a/service/datasync/api_op_UpdateLocationNfs.go
+++ b/service/datasync/api_op_UpdateLocationNfs.go
@@ -11,12 +11,12 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Modifies some configurations of the Network File System (NFS) transfer location
-// that you're using with DataSync.
+// Modifies the following configuration parameters of the Network File System
+// (NFS) transfer location that you're using with DataSync.
//
-// For more information, see [Configuring transfers to or from an NFS file server].
+// For more information, see [Configuring transfers with an NFS file server].
//
-// [Configuring transfers to or from an NFS file server]: https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html
+// [Configuring transfers with an NFS file server]: https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html
func (c *Client) UpdateLocationNfs(ctx context.Context, params *UpdateLocationNfsInput, optFns ...func(*Options)) (*UpdateLocationNfsOutput, error) {
if params == nil {
params = &UpdateLocationNfsInput{}
diff --git a/service/datasync/api_op_UpdateLocationObjectStorage.go b/service/datasync/api_op_UpdateLocationObjectStorage.go
index 92b05784c03..48eace52883 100644
--- a/service/datasync/api_op_UpdateLocationObjectStorage.go
+++ b/service/datasync/api_op_UpdateLocationObjectStorage.go
@@ -11,8 +11,12 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Updates some parameters of an existing DataSync location for an object storage
-// system.
+// Modifies the following configuration parameters of the object storage transfer
+// location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with an object storage system].
+//
+// [Configuring DataSync transfers with an object storage system]: https://docs.aws.amazon.com/datasync/latest/userguide/create-object-location.html
func (c *Client) UpdateLocationObjectStorage(ctx context.Context, params *UpdateLocationObjectStorageInput, optFns ...func(*Options)) (*UpdateLocationObjectStorageOutput, error) {
if params == nil {
params = &UpdateLocationObjectStorageInput{}
diff --git a/service/datasync/api_op_UpdateLocationS3.go b/service/datasync/api_op_UpdateLocationS3.go
new file mode 100644
index 00000000000..48728a88c34
--- /dev/null
+++ b/service/datasync/api_op_UpdateLocationS3.go
@@ -0,0 +1,198 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package datasync
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/datasync/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Modifies the following configuration parameters of the Amazon S3 transfer
+// location that you're using with DataSync.
+//
+// Before you begin, make sure that you read the following topics:
+//
+// [Storage class considerations with Amazon S3 locations]
+//
+// [Evaluating S3 request costs when using DataSync]
+//
+// [Storage class considerations with Amazon S3 locations]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#using-storage-classes
+// [Evaluating S3 request costs when using DataSync]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-s3-requests
+func (c *Client) UpdateLocationS3(ctx context.Context, params *UpdateLocationS3Input, optFns ...func(*Options)) (*UpdateLocationS3Output, error) {
+ if params == nil {
+ params = &UpdateLocationS3Input{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateLocationS3", params, optFns, c.addOperationUpdateLocationS3Middlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateLocationS3Output)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateLocationS3Input struct {
+
+ // Specifies the Amazon Resource Name (ARN) of the Amazon S3 transfer location
+ // that you're updating.
+ //
+ // This member is required.
+ LocationArn *string
+
+ // Specifies the Amazon Resource Name (ARN) of the Identity and Access Management
+ // (IAM) role that DataSync uses to access your S3 bucket.
+ //
+ // For more information, see [Providing DataSync access to S3 buckets].
+ //
+ // [Providing DataSync access to S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
+ S3Config *types.S3Config
+
+ // Specifies the storage class that you want your objects to use when Amazon S3 is
+ // a transfer destination.
+ //
+ // For buckets in Amazon Web Services Regions, the storage class defaults to
+ // STANDARD . For buckets on Outposts, the storage class defaults to OUTPOSTS .
+ //
+ // For more information, see [Storage class considerations with Amazon S3 transfers].
+ //
+ // [Storage class considerations with Amazon S3 transfers]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#using-storage-classes
+ S3StorageClass types.S3StorageClass
+
+ // Specifies a prefix in the S3 bucket that DataSync reads from or writes to
+ // (depending on whether the bucket is a source or destination location).
+ //
+ // DataSync can't transfer objects with a prefix that begins with a slash ( / ) or
+ // includes // , /./ , or /../ patterns. For example:
+ //
+ // - /photos
+ //
+ // - photos//2006/January
+ //
+ // - photos/./2006/February
+ //
+ // - photos/../2006/March
+ Subdirectory *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateLocationS3Output struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateLocationS3Middlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateLocationS3{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateLocationS3{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateLocationS3"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateLocationS3ValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateLocationS3(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateLocationS3(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateLocationS3",
+ }
+}
diff --git a/service/datasync/api_op_UpdateLocationSmb.go b/service/datasync/api_op_UpdateLocationSmb.go
index c9e084ec105..a5104ebffab 100644
--- a/service/datasync/api_op_UpdateLocationSmb.go
+++ b/service/datasync/api_op_UpdateLocationSmb.go
@@ -11,8 +11,12 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Updates some of the parameters of a Server Message Block (SMB) file server
-// location that you can use for DataSync transfers.
+// Modifies the following configuration parameters of the Server Message Block
+// (SMB) transfer location that you're using with DataSync.
+//
+// For more information, see [Configuring DataSync transfers with an SMB file server].
+//
+// [Configuring DataSync transfers with an SMB file server]: https://docs.aws.amazon.com/datasync/latest/userguide/create-smb-location.html
func (c *Client) UpdateLocationSmb(ctx context.Context, params *UpdateLocationSmbInput, optFns ...func(*Options)) (*UpdateLocationSmbOutput, error) {
if params == nil {
params = &UpdateLocationSmbInput{}
diff --git a/service/datasync/deserializers.go b/service/datasync/deserializers.go
index a20d2ee3e61..85a05780589 100644
--- a/service/datasync/deserializers.go
+++ b/service/datasync/deserializers.go
@@ -6073,14 +6073,698 @@ func awsAwsjson11_deserializeOpErrorUpdateLocationAzureBlob(response *smithyhttp
}
}
+type awsAwsjson11_deserializeOpUpdateLocationEfs struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateLocationEfs) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateLocationEfs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationEfs(response, &metadata)
+ }
+ output := &UpdateLocationEfsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationEfsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdateLocationEfs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InternalException", errorCode):
+ return awsAwsjson11_deserializeErrorInternalException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidRequestException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsAwsjson11_deserializeOpUpdateLocationFsxLustre struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateLocationFsxLustre) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateLocationFsxLustre) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationFsxLustre(response, &metadata)
+ }
+ output := &UpdateLocationFsxLustreOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationFsxLustreOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdateLocationFsxLustre(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InternalException", errorCode):
+ return awsAwsjson11_deserializeErrorInternalException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidRequestException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsAwsjson11_deserializeOpUpdateLocationFsxOntap struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateLocationFsxOntap) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateLocationFsxOntap) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationFsxOntap(response, &metadata)
+ }
+ output := &UpdateLocationFsxOntapOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationFsxOntapOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdateLocationFsxOntap(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InternalException", errorCode):
+ return awsAwsjson11_deserializeErrorInternalException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidRequestException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsAwsjson11_deserializeOpUpdateLocationFsxOpenZfs struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateLocationFsxOpenZfs) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateLocationFsxOpenZfs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationFsxOpenZfs(response, &metadata)
+ }
+ output := &UpdateLocationFsxOpenZfsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationFsxOpenZfsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdateLocationFsxOpenZfs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InternalException", errorCode):
+ return awsAwsjson11_deserializeErrorInternalException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidRequestException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsAwsjson11_deserializeOpUpdateLocationFsxWindows struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateLocationFsxWindows) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateLocationFsxWindows) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationFsxWindows(response, &metadata)
+ }
+ output := &UpdateLocationFsxWindowsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationFsxWindowsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdateLocationFsxWindows(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InternalException", errorCode):
+ return awsAwsjson11_deserializeErrorInternalException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidRequestException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
type awsAwsjson11_deserializeOpUpdateLocationHdfs struct {
}
-func (*awsAwsjson11_deserializeOpUpdateLocationHdfs) ID() string {
+func (*awsAwsjson11_deserializeOpUpdateLocationHdfs) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateLocationHdfs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationHdfs(response, &metadata)
+ }
+ output := &UpdateLocationHdfsOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationHdfsOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdateLocationHdfs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InternalException", errorCode):
+ return awsAwsjson11_deserializeErrorInternalException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidRequestException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsAwsjson11_deserializeOpUpdateLocationNfs struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateLocationNfs) ID() string {
return "OperationDeserializer"
}
-func (m *awsAwsjson11_deserializeOpUpdateLocationHdfs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsAwsjson11_deserializeOpUpdateLocationNfs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -6098,9 +6782,9 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationHdfs) HandleDeserialize(ctx con
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationHdfs(response, &metadata)
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationNfs(response, &metadata)
}
- output := &UpdateLocationHdfsOutput{}
+ output := &UpdateLocationNfsOutput{}
out.Result = output
var buff [1024]byte
@@ -6120,7 +6804,7 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationHdfs) HandleDeserialize(ctx con
return out, metadata, err
}
- err = awsAwsjson11_deserializeOpDocumentUpdateLocationHdfsOutput(&output, shape)
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationNfsOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -6134,7 +6818,7 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationHdfs) HandleDeserialize(ctx con
return out, metadata, err
}
-func awsAwsjson11_deserializeOpErrorUpdateLocationHdfs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsAwsjson11_deserializeOpErrorUpdateLocationNfs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -6187,14 +6871,14 @@ func awsAwsjson11_deserializeOpErrorUpdateLocationHdfs(response *smithyhttp.Resp
}
}
-type awsAwsjson11_deserializeOpUpdateLocationNfs struct {
+type awsAwsjson11_deserializeOpUpdateLocationObjectStorage struct {
}
-func (*awsAwsjson11_deserializeOpUpdateLocationNfs) ID() string {
+func (*awsAwsjson11_deserializeOpUpdateLocationObjectStorage) ID() string {
return "OperationDeserializer"
}
-func (m *awsAwsjson11_deserializeOpUpdateLocationNfs) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsAwsjson11_deserializeOpUpdateLocationObjectStorage) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -6212,9 +6896,9 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationNfs) HandleDeserialize(ctx cont
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationNfs(response, &metadata)
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationObjectStorage(response, &metadata)
}
- output := &UpdateLocationNfsOutput{}
+ output := &UpdateLocationObjectStorageOutput{}
out.Result = output
var buff [1024]byte
@@ -6234,7 +6918,7 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationNfs) HandleDeserialize(ctx cont
return out, metadata, err
}
- err = awsAwsjson11_deserializeOpDocumentUpdateLocationNfsOutput(&output, shape)
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationObjectStorageOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -6248,7 +6932,7 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationNfs) HandleDeserialize(ctx cont
return out, metadata, err
}
-func awsAwsjson11_deserializeOpErrorUpdateLocationNfs(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsAwsjson11_deserializeOpErrorUpdateLocationObjectStorage(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -6301,14 +6985,14 @@ func awsAwsjson11_deserializeOpErrorUpdateLocationNfs(response *smithyhttp.Respo
}
}
-type awsAwsjson11_deserializeOpUpdateLocationObjectStorage struct {
+type awsAwsjson11_deserializeOpUpdateLocationS3 struct {
}
-func (*awsAwsjson11_deserializeOpUpdateLocationObjectStorage) ID() string {
+func (*awsAwsjson11_deserializeOpUpdateLocationS3) ID() string {
return "OperationDeserializer"
}
-func (m *awsAwsjson11_deserializeOpUpdateLocationObjectStorage) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsAwsjson11_deserializeOpUpdateLocationS3) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -6326,9 +7010,9 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationObjectStorage) HandleDeserializ
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationObjectStorage(response, &metadata)
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateLocationS3(response, &metadata)
}
- output := &UpdateLocationObjectStorageOutput{}
+ output := &UpdateLocationS3Output{}
out.Result = output
var buff [1024]byte
@@ -6348,7 +7032,7 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationObjectStorage) HandleDeserializ
return out, metadata, err
}
- err = awsAwsjson11_deserializeOpDocumentUpdateLocationObjectStorageOutput(&output, shape)
+ err = awsAwsjson11_deserializeOpDocumentUpdateLocationS3Output(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -6362,7 +7046,7 @@ func (m *awsAwsjson11_deserializeOpUpdateLocationObjectStorage) HandleDeserializ
return out, metadata, err
}
-func awsAwsjson11_deserializeOpErrorUpdateLocationObjectStorage(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsAwsjson11_deserializeOpErrorUpdateLocationS3(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -14807,6 +15491,161 @@ func awsAwsjson11_deserializeOpDocumentUpdateLocationAzureBlobOutput(v **UpdateL
return nil
}
+func awsAwsjson11_deserializeOpDocumentUpdateLocationEfsOutput(v **UpdateLocationEfsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateLocationEfsOutput
+ if *v == nil {
+ sv = &UpdateLocationEfsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeOpDocumentUpdateLocationFsxLustreOutput(v **UpdateLocationFsxLustreOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateLocationFsxLustreOutput
+ if *v == nil {
+ sv = &UpdateLocationFsxLustreOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeOpDocumentUpdateLocationFsxOntapOutput(v **UpdateLocationFsxOntapOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateLocationFsxOntapOutput
+ if *v == nil {
+ sv = &UpdateLocationFsxOntapOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeOpDocumentUpdateLocationFsxOpenZfsOutput(v **UpdateLocationFsxOpenZfsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateLocationFsxOpenZfsOutput
+ if *v == nil {
+ sv = &UpdateLocationFsxOpenZfsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeOpDocumentUpdateLocationFsxWindowsOutput(v **UpdateLocationFsxWindowsOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateLocationFsxWindowsOutput
+ if *v == nil {
+ sv = &UpdateLocationFsxWindowsOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeOpDocumentUpdateLocationHdfsOutput(v **UpdateLocationHdfsOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -14900,6 +15739,37 @@ func awsAwsjson11_deserializeOpDocumentUpdateLocationObjectStorageOutput(v **Upd
return nil
}
+func awsAwsjson11_deserializeOpDocumentUpdateLocationS3Output(v **UpdateLocationS3Output, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateLocationS3Output
+ if *v == nil {
+ sv = &UpdateLocationS3Output{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeOpDocumentUpdateLocationSmbOutput(v **UpdateLocationSmbOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
diff --git a/service/datasync/generated.json b/service/datasync/generated.json
index 69e02523087..1c86a23a750 100644
--- a/service/datasync/generated.json
+++ b/service/datasync/generated.json
@@ -61,9 +61,15 @@
"api_op_UpdateAgent.go",
"api_op_UpdateDiscoveryJob.go",
"api_op_UpdateLocationAzureBlob.go",
+ "api_op_UpdateLocationEfs.go",
+ "api_op_UpdateLocationFsxLustre.go",
+ "api_op_UpdateLocationFsxOntap.go",
+ "api_op_UpdateLocationFsxOpenZfs.go",
+ "api_op_UpdateLocationFsxWindows.go",
"api_op_UpdateLocationHdfs.go",
"api_op_UpdateLocationNfs.go",
"api_op_UpdateLocationObjectStorage.go",
+ "api_op_UpdateLocationS3.go",
"api_op_UpdateLocationSmb.go",
"api_op_UpdateStorageSystem.go",
"api_op_UpdateTask.go",
diff --git a/service/datasync/go_module_metadata.go b/service/datasync/go_module_metadata.go
index e7555b67993..436154a984c 100644
--- a/service/datasync/go_module_metadata.go
+++ b/service/datasync/go_module_metadata.go
@@ -3,4 +3,4 @@
package datasync
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.43.5"
+const goModuleVersion = "1.44.0"
diff --git a/service/datasync/internal/endpoints/endpoints.go b/service/datasync/internal/endpoints/endpoints.go
index 25a0a90809d..14d3db3c1de 100644
--- a/service/datasync/internal/endpoints/endpoints.go
+++ b/service/datasync/internal/endpoints/endpoints.go
@@ -142,39 +142,111 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "af-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "af-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.af-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-east-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-northeast-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-northeast-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-3",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-northeast-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-south-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-south-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-south-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-southeast-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-southeast-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-3",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-southeast-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-4",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-4",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-southeast-4.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-5",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-5",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ap-southeast-5.api.aws",
+ },
endpoints.EndpointKey{
Region: "ca-central-1",
}: endpoints.Endpoint{},
@@ -184,6 +256,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.ca-central-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "ca-central-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.ca-central-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "ca-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ca-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ca-west-1",
}: endpoints.Endpoint{},
@@ -193,30 +277,90 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.ca-west-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "ca-west-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.ca-west-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "ca-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.ca-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-central-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-central-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-central-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-north-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-north-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-north-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-south-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-south-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-south-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-west-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-west-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-3",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-west-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.eu-west-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "fips-ca-central-1",
}: endpoints.Endpoint{
@@ -274,15 +418,39 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "il-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "il-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.il-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "me-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "me-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.me-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "me-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "me-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.me-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "sa-east-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "sa-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.sa-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-east-1",
}: endpoints.Endpoint{},
@@ -292,6 +460,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.us-east-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-east-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.us-east-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.us-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-east-2",
}: endpoints.Endpoint{},
@@ -301,6 +481,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.us-east-2.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-east-2",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.us-east-2.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-east-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.us-east-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-west-1",
}: endpoints.Endpoint{},
@@ -310,6 +502,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.us-west-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-west-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.us-west-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.us-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-west-2",
}: endpoints.Endpoint{},
@@ -319,6 +523,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.us-west-2.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-west-2",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.us-west-2.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-west-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.us-west-2.api.aws",
+ },
},
},
{
@@ -359,9 +575,21 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "cn-north-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "cn-north-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.cn-north-1.api.amazonwebservices.com.cn",
+ },
endpoints.EndpointKey{
Region: "cn-northwest-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "cn-northwest-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.cn-northwest-1.api.amazonwebservices.com.cn",
+ },
},
},
{
@@ -548,6 +776,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.us-gov-east-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-gov-east-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.us-gov-east-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-gov-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.us-gov-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-gov-west-1",
}: endpoints.Endpoint{},
@@ -557,6 +797,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "datasync-fips.us-gov-west-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-gov-west-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync-fips.us-gov-west-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-gov-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "datasync.us-gov-west-1.api.aws",
+ },
},
},
}
diff --git a/service/datasync/serializers.go b/service/datasync/serializers.go
index 35f1b15f2b7..2b858def885 100644
--- a/service/datasync/serializers.go
+++ b/service/datasync/serializers.go
@@ -3250,6 +3250,311 @@ func (m *awsAwsjson11_serializeOpUpdateLocationAzureBlob) HandleSerialize(ctx co
return next.HandleSerialize(ctx, in)
}
+type awsAwsjson11_serializeOpUpdateLocationEfs struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateLocationEfs) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateLocationEfs) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateLocationEfsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("FmrsService.UpdateLocationEfs")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateLocationEfsInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsjson11_serializeOpUpdateLocationFsxLustre struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateLocationFsxLustre) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateLocationFsxLustre) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateLocationFsxLustreInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("FmrsService.UpdateLocationFsxLustre")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateLocationFsxLustreInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsjson11_serializeOpUpdateLocationFsxOntap struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateLocationFsxOntap) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateLocationFsxOntap) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateLocationFsxOntapInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("FmrsService.UpdateLocationFsxOntap")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateLocationFsxOntapInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsjson11_serializeOpUpdateLocationFsxOpenZfs struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateLocationFsxOpenZfs) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateLocationFsxOpenZfs) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateLocationFsxOpenZfsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("FmrsService.UpdateLocationFsxOpenZfs")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateLocationFsxOpenZfsInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsjson11_serializeOpUpdateLocationFsxWindows struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateLocationFsxWindows) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateLocationFsxWindows) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateLocationFsxWindowsInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("FmrsService.UpdateLocationFsxWindows")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateLocationFsxWindowsInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
type awsAwsjson11_serializeOpUpdateLocationHdfs struct {
}
@@ -3433,6 +3738,67 @@ func (m *awsAwsjson11_serializeOpUpdateLocationObjectStorage) HandleSerialize(ct
return next.HandleSerialize(ctx, in)
}
+type awsAwsjson11_serializeOpUpdateLocationS3 struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateLocationS3) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateLocationS3) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateLocationS3Input)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("FmrsService.UpdateLocationS3")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateLocationS3Input(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
type awsAwsjson11_serializeOpUpdateLocationSmb struct {
}
@@ -3890,6 +4256,56 @@ func awsAwsjson11_serializeDocumentFsxProtocolSmb(v *types.FsxProtocolSmb, value
return nil
}
+func awsAwsjson11_serializeDocumentFsxUpdateProtocol(v *types.FsxUpdateProtocol, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.NFS != nil {
+ ok := object.Key("NFS")
+ if err := awsAwsjson11_serializeDocumentFsxProtocolNfs(v.NFS, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.SMB != nil {
+ ok := object.Key("SMB")
+ if err := awsAwsjson11_serializeDocumentFsxUpdateProtocolSmb(v.SMB, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsAwsjson11_serializeDocumentFsxUpdateProtocolSmb(v *types.FsxUpdateProtocolSmb, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Domain != nil {
+ ok := object.Key("Domain")
+ ok.String(*v.Domain)
+ }
+
+ if v.MountOptions != nil {
+ ok := object.Key("MountOptions")
+ if err := awsAwsjson11_serializeDocumentSmbMountOptions(v.MountOptions, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Password != nil {
+ ok := object.Key("Password")
+ ok.String(*v.Password)
+ }
+
+ if v.User != nil {
+ ok := object.Key("User")
+ ok.String(*v.User)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeDocumentHdfsNameNode(v *types.HdfsNameNode, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -5860,6 +6276,135 @@ func awsAwsjson11_serializeOpDocumentUpdateLocationAzureBlobInput(v *UpdateLocat
return nil
}
+func awsAwsjson11_serializeOpDocumentUpdateLocationEfsInput(v *UpdateLocationEfsInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.AccessPointArn != nil {
+ ok := object.Key("AccessPointArn")
+ ok.String(*v.AccessPointArn)
+ }
+
+ if v.FileSystemAccessRoleArn != nil {
+ ok := object.Key("FileSystemAccessRoleArn")
+ ok.String(*v.FileSystemAccessRoleArn)
+ }
+
+ if len(v.InTransitEncryption) > 0 {
+ ok := object.Key("InTransitEncryption")
+ ok.String(string(v.InTransitEncryption))
+ }
+
+ if v.LocationArn != nil {
+ ok := object.Key("LocationArn")
+ ok.String(*v.LocationArn)
+ }
+
+ if v.Subdirectory != nil {
+ ok := object.Key("Subdirectory")
+ ok.String(*v.Subdirectory)
+ }
+
+ return nil
+}
+
+func awsAwsjson11_serializeOpDocumentUpdateLocationFsxLustreInput(v *UpdateLocationFsxLustreInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.LocationArn != nil {
+ ok := object.Key("LocationArn")
+ ok.String(*v.LocationArn)
+ }
+
+ if v.Subdirectory != nil {
+ ok := object.Key("Subdirectory")
+ ok.String(*v.Subdirectory)
+ }
+
+ return nil
+}
+
+func awsAwsjson11_serializeOpDocumentUpdateLocationFsxOntapInput(v *UpdateLocationFsxOntapInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.LocationArn != nil {
+ ok := object.Key("LocationArn")
+ ok.String(*v.LocationArn)
+ }
+
+ if v.Protocol != nil {
+ ok := object.Key("Protocol")
+ if err := awsAwsjson11_serializeDocumentFsxUpdateProtocol(v.Protocol, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Subdirectory != nil {
+ ok := object.Key("Subdirectory")
+ ok.String(*v.Subdirectory)
+ }
+
+ return nil
+}
+
+func awsAwsjson11_serializeOpDocumentUpdateLocationFsxOpenZfsInput(v *UpdateLocationFsxOpenZfsInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.LocationArn != nil {
+ ok := object.Key("LocationArn")
+ ok.String(*v.LocationArn)
+ }
+
+ if v.Protocol != nil {
+ ok := object.Key("Protocol")
+ if err := awsAwsjson11_serializeDocumentFsxProtocol(v.Protocol, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.Subdirectory != nil {
+ ok := object.Key("Subdirectory")
+ ok.String(*v.Subdirectory)
+ }
+
+ return nil
+}
+
+func awsAwsjson11_serializeOpDocumentUpdateLocationFsxWindowsInput(v *UpdateLocationFsxWindowsInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Domain != nil {
+ ok := object.Key("Domain")
+ ok.String(*v.Domain)
+ }
+
+ if v.LocationArn != nil {
+ ok := object.Key("LocationArn")
+ ok.String(*v.LocationArn)
+ }
+
+ if v.Password != nil {
+ ok := object.Key("Password")
+ ok.String(*v.Password)
+ }
+
+ if v.Subdirectory != nil {
+ ok := object.Key("Subdirectory")
+ ok.String(*v.Subdirectory)
+ }
+
+ if v.User != nil {
+ ok := object.Key("User")
+ ok.String(*v.User)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeOpDocumentUpdateLocationHdfsInput(v *UpdateLocationHdfsInput, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -6018,6 +6563,35 @@ func awsAwsjson11_serializeOpDocumentUpdateLocationObjectStorageInput(v *UpdateL
return nil
}
+func awsAwsjson11_serializeOpDocumentUpdateLocationS3Input(v *UpdateLocationS3Input, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.LocationArn != nil {
+ ok := object.Key("LocationArn")
+ ok.String(*v.LocationArn)
+ }
+
+ if v.S3Config != nil {
+ ok := object.Key("S3Config")
+ if err := awsAwsjson11_serializeDocumentS3Config(v.S3Config, ok); err != nil {
+ return err
+ }
+ }
+
+ if len(v.S3StorageClass) > 0 {
+ ok := object.Key("S3StorageClass")
+ ok.String(string(v.S3StorageClass))
+ }
+
+ if v.Subdirectory != nil {
+ ok := object.Key("Subdirectory")
+ ok.String(*v.Subdirectory)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeOpDocumentUpdateLocationSmbInput(v *UpdateLocationSmbInput, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
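Every generated `HandleSerialize` method above repeats the same URL-path joining step: an empty request path becomes the operation path, and the trailing slash that `path.Join` strips is restored when the operation path ends in `/`. A self-contained sketch of just that logic (the helper name `joinOperationPath` is mine, not the SDK's):

```go
package main

import (
	"fmt"
	"path"
)

// joinOperationPath mirrors the path handling in the generated
// serializers: path.Join cleans the result (dropping any trailing
// slash), so a trailing slash on the operation path is re-appended
// unless the joined path is just "/".
func joinOperationPath(reqPath, operationPath string) string {
	if len(reqPath) == 0 {
		return operationPath
	}
	joined := path.Join(reqPath, operationPath)
	if joined != "/" && operationPath[len(operationPath)-1] == '/' {
		joined += "/"
	}
	return joined
}

func main() {
	fmt.Println(joinOperationPath("", "/"))        // "/"
	fmt.Println(joinOperationPath("/prefix", "/")) // "/prefix/"
}
```

For awsJson 1.1 services like DataSync the operation path is always `"/"`, so the check mostly matters when a client is configured with a custom endpoint that carries its own path prefix.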
diff --git a/service/datasync/snapshot/api_op_UpdateLocationEfs.go.snap b/service/datasync/snapshot/api_op_UpdateLocationEfs.go.snap
new file mode 100644
index 00000000000..5d2a076ca23
--- /dev/null
+++ b/service/datasync/snapshot/api_op_UpdateLocationEfs.go.snap
@@ -0,0 +1,41 @@
+UpdateLocationEfs
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/datasync/snapshot/api_op_UpdateLocationFsxLustre.go.snap b/service/datasync/snapshot/api_op_UpdateLocationFsxLustre.go.snap
new file mode 100644
index 00000000000..67818cdf6e2
--- /dev/null
+++ b/service/datasync/snapshot/api_op_UpdateLocationFsxLustre.go.snap
@@ -0,0 +1,41 @@
+UpdateLocationFsxLustre
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/datasync/snapshot/api_op_UpdateLocationFsxOntap.go.snap b/service/datasync/snapshot/api_op_UpdateLocationFsxOntap.go.snap
new file mode 100644
index 00000000000..5331b8f068b
--- /dev/null
+++ b/service/datasync/snapshot/api_op_UpdateLocationFsxOntap.go.snap
@@ -0,0 +1,41 @@
+UpdateLocationFsxOntap
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/datasync/snapshot/api_op_UpdateLocationFsxOpenZfs.go.snap b/service/datasync/snapshot/api_op_UpdateLocationFsxOpenZfs.go.snap
new file mode 100644
index 00000000000..f73de51cb3d
--- /dev/null
+++ b/service/datasync/snapshot/api_op_UpdateLocationFsxOpenZfs.go.snap
@@ -0,0 +1,41 @@
+UpdateLocationFsxOpenZfs
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/datasync/snapshot/api_op_UpdateLocationFsxWindows.go.snap b/service/datasync/snapshot/api_op_UpdateLocationFsxWindows.go.snap
new file mode 100644
index 00000000000..2bd8fd1418e
--- /dev/null
+++ b/service/datasync/snapshot/api_op_UpdateLocationFsxWindows.go.snap
@@ -0,0 +1,41 @@
+UpdateLocationFsxWindows
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/datasync/snapshot/api_op_UpdateLocationS3.go.snap b/service/datasync/snapshot/api_op_UpdateLocationS3.go.snap
new file mode 100644
index 00000000000..04acf231226
--- /dev/null
+++ b/service/datasync/snapshot/api_op_UpdateLocationS3.go.snap
@@ -0,0 +1,41 @@
+UpdateLocationS3
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/datasync/snapshot_test.go b/service/datasync/snapshot_test.go
index 06e010cb17c..4b2c00ec4a7 100644
--- a/service/datasync/snapshot_test.go
+++ b/service/datasync/snapshot_test.go
@@ -698,6 +698,66 @@ func TestCheckSnapshot_UpdateLocationAzureBlob(t *testing.T) {
}
}
+func TestCheckSnapshot_UpdateLocationEfs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationEfs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateLocationEfs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_UpdateLocationFsxLustre(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxLustre(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateLocationFsxLustre")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_UpdateLocationFsxOntap(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxOntap(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateLocationFsxOntap")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_UpdateLocationFsxOpenZfs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxOpenZfs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateLocationFsxOpenZfs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestCheckSnapshot_UpdateLocationFsxWindows(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxWindows(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateLocationFsxWindows")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_UpdateLocationHdfs(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateLocationHdfs(context.Background(), nil, func(o *Options) {
@@ -734,6 +794,18 @@ func TestCheckSnapshot_UpdateLocationObjectStorage(t *testing.T) {
}
}
+func TestCheckSnapshot_UpdateLocationS3(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationS3(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateLocationS3")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_UpdateLocationSmb(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateLocationSmb(context.Background(), nil, func(o *Options) {
@@ -1417,6 +1489,66 @@ func TestUpdateSnapshot_UpdateLocationAzureBlob(t *testing.T) {
}
}
+func TestUpdateSnapshot_UpdateLocationEfs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationEfs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateLocationEfs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_UpdateLocationFsxLustre(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxLustre(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateLocationFsxLustre")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_UpdateLocationFsxOntap(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxOntap(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateLocationFsxOntap")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_UpdateLocationFsxOpenZfs(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxOpenZfs(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateLocationFsxOpenZfs")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestUpdateSnapshot_UpdateLocationFsxWindows(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationFsxWindows(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateLocationFsxWindows")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_UpdateLocationHdfs(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateLocationHdfs(context.Background(), nil, func(o *Options) {
@@ -1453,6 +1585,18 @@ func TestUpdateSnapshot_UpdateLocationObjectStorage(t *testing.T) {
}
}
+func TestUpdateSnapshot_UpdateLocationS3(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateLocationS3(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateLocationS3")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_UpdateLocationSmb(t *testing.T) {
svc := New(Options{})
_, err := svc.UpdateLocationSmb(context.Background(), nil, func(o *Options) {
diff --git a/service/datasync/types/types.go b/service/datasync/types/types.go
index 7f739da807b..3e79414d1a3 100644
--- a/service/datasync/types/types.go
+++ b/service/datasync/types/types.go
@@ -194,8 +194,8 @@ type FsxProtocol struct {
}
// Specifies the Network File System (NFS) protocol configuration that DataSync
-// uses to access your Amazon FSx for OpenZFS or Amazon FSx for NetApp ONTAP file
-// system.
+// uses to access your FSx for OpenZFS file system or FSx for ONTAP file system's
+// storage virtual machine (SVM).
type FsxProtocolNfs struct {
// Specifies how DataSync can access a location using the NFS protocol.
@@ -205,10 +205,10 @@ type FsxProtocolNfs struct {
}
// Specifies the Server Message Block (SMB) protocol configuration that DataSync
-// uses to access your Amazon FSx for NetApp ONTAP file system. For more
-// information, see [Accessing FSx for ONTAP file systems].
+// uses to access your Amazon FSx for NetApp ONTAP file system's storage virtual
+// machine (SVM). For more information, see [Providing DataSync access to FSx for ONTAP file systems].
//
-// [Accessing FSx for ONTAP file systems]: https://docs.aws.amazon.com/datasync/latest/userguide/create-ontap-location.html#create-ontap-location-access
+// [Providing DataSync access to FSx for ONTAP file systems]: https://docs.aws.amazon.com/datasync/latest/userguide/create-ontap-location.html#create-ontap-location-access
type FsxProtocolSmb struct {
// Specifies the password of a user who has permission to access your SVM.
@@ -227,17 +227,73 @@ type FsxProtocolSmb struct {
// This member is required.
User *string
- // Specifies the fully qualified domain name (FQDN) of the Microsoft Active
- // Directory that your storage virtual machine (SVM) belongs to.
+ // Specifies the name of the Windows domain that your storage virtual machine
+ // (SVM) belongs to.
//
- // If you have multiple domains in your environment, configuring this setting
- // makes sure that DataSync connects to the right SVM.
+ // If you have multiple Active Directory domains in your environment, configuring
+ // this parameter makes sure that DataSync connects to the right SVM.
+ Domain *string
+
+ // Specifies the version of the Server Message Block (SMB) protocol that DataSync
+ // uses to access an SMB file server.
+ MountOptions *SmbMountOptions
+
+ noSmithyDocumentSerde
+}
+
+// Specifies the data transfer protocol that DataSync uses to access your Amazon
+// FSx file system.
+//
+// You can't update the Network File System (NFS) protocol configuration for FSx
+// for ONTAP locations. DataSync currently only supports NFS version 3 with this
+// location type.
+type FsxUpdateProtocol struct {
+
+ // Specifies the Network File System (NFS) protocol configuration that DataSync
+ // uses to access your FSx for OpenZFS file system or FSx for ONTAP file system's
+ // storage virtual machine (SVM).
+ NFS *FsxProtocolNfs
+
+ // Specifies the Server Message Block (SMB) protocol configuration that DataSync
+ // uses to access your FSx for ONTAP file system's storage virtual machine (SVM).
+ SMB *FsxUpdateProtocolSmb
+
+ noSmithyDocumentSerde
+}
+
+// Specifies the Server Message Block (SMB) protocol configuration that DataSync
+// uses to access your Amazon FSx for NetApp ONTAP file system's storage virtual
+// machine (SVM). For more information, see [Providing DataSync access to FSx for ONTAP file systems].
+//
+// [Providing DataSync access to FSx for ONTAP file systems]: https://docs.aws.amazon.com/datasync/latest/userguide/create-ontap-location.html#create-ontap-location-access
+type FsxUpdateProtocolSmb struct {
+
+ // Specifies the name of the Windows domain that your storage virtual machine
+ // (SVM) belongs to.
+ //
+ // If you have multiple Active Directory domains in your environment, configuring
+ // this parameter makes sure that DataSync connects to the right SVM.
Domain *string
// Specifies the version of the Server Message Block (SMB) protocol that DataSync
// uses to access an SMB file server.
MountOptions *SmbMountOptions
+ // Specifies the password of a user who has permission to access your SVM.
+ Password *string
+
+ // Specifies a user that can mount and access the files, folders, and metadata in
+ // your SVM.
+ //
+ // For information about choosing a user with the right level of access for your
+ // transfer, see [Using the SMB protocol].
+ //
+ // [Using the SMB protocol]: https://docs.aws.amazon.com/datasync/latest/userguide/create-ontap-location.html#create-ontap-location-smb
+ User *string
+
noSmithyDocumentSerde
}
@@ -1143,9 +1199,9 @@ type ResourceMetrics struct {
// Specifies the Amazon Resource Name (ARN) of the Identity and Access Management
// (IAM) role that DataSync uses to access your S3 bucket.
//
-// For more information, see [Accessing S3 buckets].
+// For more information, see [Providing DataSync access to S3 buckets].
//
-// [Accessing S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
+// [Providing DataSync access to S3 buckets]: https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#create-s3-location-access
type S3Config struct {
// Specifies the ARN of the IAM role that DataSync uses to access your S3 bucket.
diff --git a/service/datasync/validators.go b/service/datasync/validators.go
index 9e66480f7fc..53d0e584ed0 100644
--- a/service/datasync/validators.go
+++ b/service/datasync/validators.go
@@ -990,6 +990,106 @@ func (m *validateOpUpdateLocationAzureBlob) HandleInitialize(ctx context.Context
return next.HandleInitialize(ctx, in)
}
+type validateOpUpdateLocationEfs struct {
+}
+
+func (*validateOpUpdateLocationEfs) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateLocationEfs) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateLocationEfsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateLocationEfsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpUpdateLocationFsxLustre struct {
+}
+
+func (*validateOpUpdateLocationFsxLustre) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateLocationFsxLustre) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateLocationFsxLustreInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateLocationFsxLustreInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpUpdateLocationFsxOntap struct {
+}
+
+func (*validateOpUpdateLocationFsxOntap) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateLocationFsxOntap) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateLocationFsxOntapInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateLocationFsxOntapInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpUpdateLocationFsxOpenZfs struct {
+}
+
+func (*validateOpUpdateLocationFsxOpenZfs) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateLocationFsxOpenZfs) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateLocationFsxOpenZfsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateLocationFsxOpenZfsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
+type validateOpUpdateLocationFsxWindows struct {
+}
+
+func (*validateOpUpdateLocationFsxWindows) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateLocationFsxWindows) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateLocationFsxWindowsInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateLocationFsxWindowsInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpUpdateLocationHdfs struct {
}
@@ -1050,6 +1150,26 @@ func (m *validateOpUpdateLocationObjectStorage) HandleInitialize(ctx context.Con
return next.HandleInitialize(ctx, in)
}
+type validateOpUpdateLocationS3 struct {
+}
+
+func (*validateOpUpdateLocationS3) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateLocationS3) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateLocationS3Input)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateLocationS3Input(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpUpdateLocationSmb struct {
}
@@ -1326,6 +1446,26 @@ func addOpUpdateLocationAzureBlobValidationMiddleware(stack *middleware.Stack) e
return stack.Initialize.Add(&validateOpUpdateLocationAzureBlob{}, middleware.After)
}
+func addOpUpdateLocationEfsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateLocationEfs{}, middleware.After)
+}
+
+func addOpUpdateLocationFsxLustreValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateLocationFsxLustre{}, middleware.After)
+}
+
+func addOpUpdateLocationFsxOntapValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateLocationFsxOntap{}, middleware.After)
+}
+
+func addOpUpdateLocationFsxOpenZfsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateLocationFsxOpenZfs{}, middleware.After)
+}
+
+func addOpUpdateLocationFsxWindowsValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateLocationFsxWindows{}, middleware.After)
+}
+
func addOpUpdateLocationHdfsValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateLocationHdfs{}, middleware.After)
}
@@ -1338,6 +1478,10 @@ func addOpUpdateLocationObjectStorageValidationMiddleware(stack *middleware.Stac
return stack.Initialize.Add(&validateOpUpdateLocationObjectStorage{}, middleware.After)
}
+func addOpUpdateLocationS3ValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateLocationS3{}, middleware.After)
+}
+
func addOpUpdateLocationSmbValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateLocationSmb{}, middleware.After)
}
@@ -2753,6 +2897,86 @@ func validateOpUpdateLocationAzureBlobInput(v *UpdateLocationAzureBlobInput) err
}
}
+func validateOpUpdateLocationEfsInput(v *UpdateLocationEfsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateLocationEfsInput"}
+ if v.LocationArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("LocationArn"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
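All of the new DataSync validators added in this hunk share one generated shape: a `nil` input passes, missing required members accumulate into a `smithy.InvalidParamsError`, and the error is returned only if something accumulated. A stdlib-only sketch of that accumulate-then-return pattern follows; `paramErrors`, `updateLocationEfsInput`, and `validateInput` are illustrative stand-ins, not the smithy-go API:

```go
package main

import (
	"fmt"
	"strings"
)

// paramErrors is a stand-in for smithy.InvalidParamsError: it collects
// missing-parameter names and reports them as a single error.
type paramErrors struct {
	context string
	missing []string
}

func (e *paramErrors) Len() int { return len(e.missing) }

func (e *paramErrors) Error() string {
	return fmt.Sprintf("%s: missing required fields: %s",
		e.context, strings.Join(e.missing, ", "))
}

// updateLocationEfsInput mirrors the one required member the generated
// validateOpUpdateLocationEfsInput checks.
type updateLocationEfsInput struct {
	LocationArn *string
}

func validateInput(v *updateLocationEfsInput) error {
	if v == nil {
		return nil // nil inputs pass; the service rejects them later
	}
	errs := &paramErrors{context: "UpdateLocationEfsInput"}
	if v.LocationArn == nil {
		errs.missing = append(errs.missing, "LocationArn")
	}
	if errs.Len() > 0 {
		return errs
	}
	return nil
}

func main() {
	fmt.Println(validateInput(&updateLocationEfsInput{}))
	arn := "arn:aws:datasync:us-east-1:123456789012:location/loc-1"
	fmt.Println(validateInput(&updateLocationEfsInput{LocationArn: &arn}) == nil)
}
```

In the real SDK this check runs as an Initialize-step middleware, so invalid inputs fail before any request is signed or sent.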
+func validateOpUpdateLocationFsxLustreInput(v *UpdateLocationFsxLustreInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateLocationFsxLustreInput"}
+ if v.LocationArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("LocationArn"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpUpdateLocationFsxOntapInput(v *UpdateLocationFsxOntapInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateLocationFsxOntapInput"}
+ if v.LocationArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("LocationArn"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpUpdateLocationFsxOpenZfsInput(v *UpdateLocationFsxOpenZfsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateLocationFsxOpenZfsInput"}
+ if v.LocationArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("LocationArn"))
+ }
+ if v.Protocol != nil {
+ if err := validateFsxProtocol(v.Protocol); err != nil {
+ invalidParams.AddNested("Protocol", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateOpUpdateLocationFsxWindowsInput(v *UpdateLocationFsxWindowsInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateLocationFsxWindowsInput"}
+ if v.LocationArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("LocationArn"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpUpdateLocationHdfsInput(v *UpdateLocationHdfsInput) error {
if v == nil {
return nil
@@ -2808,6 +3032,26 @@ func validateOpUpdateLocationObjectStorageInput(v *UpdateLocationObjectStorageIn
}
}
+func validateOpUpdateLocationS3Input(v *UpdateLocationS3Input) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateLocationS3Input"}
+ if v.LocationArn == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("LocationArn"))
+ }
+ if v.S3Config != nil {
+ if err := validateS3Config(v.S3Config); err != nil {
+ invalidParams.AddNested("S3Config", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpUpdateLocationSmbInput(v *UpdateLocationSmbInput) error {
if v == nil {
return nil
diff --git a/service/dlm/CHANGELOG.md b/service/dlm/CHANGELOG.md
index 63b7e1618be..58b1039da58 100644
--- a/service/dlm/CHANGELOG.md
+++ b/service/dlm/CHANGELOG.md
@@ -1,3 +1,11 @@
+# v1.29.0 (2024-12-16)
+
+* **Feature**: This release adds support for Local Zones in Amazon Data Lifecycle Manager EBS snapshot lifecycle policies.
+
+# v1.28.10 (2024-12-13)
+
+* No change notes available for this release.
+
# v1.28.9 (2024-12-11)
* No change notes available for this release.
diff --git a/service/dlm/go_module_metadata.go b/service/dlm/go_module_metadata.go
index a5734242fa7..726d983ba36 100644
--- a/service/dlm/go_module_metadata.go
+++ b/service/dlm/go_module_metadata.go
@@ -3,4 +3,4 @@
package dlm
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.28.9"
+const goModuleVersion = "1.29.0"
diff --git a/service/dlm/internal/endpoints/endpoints.go b/service/dlm/internal/endpoints/endpoints.go
index 8573bdce5bd..542d339239c 100644
--- a/service/dlm/internal/endpoints/endpoints.go
+++ b/service/dlm/internal/endpoints/endpoints.go
@@ -514,21 +514,9 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "us-iso-east-1",
}: endpoints.Endpoint{},
- endpoints.EndpointKey{
- Region: "us-iso-east-1",
- Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
- }: {
- Hostname: "dlm-fips.us-iso-east-1.api.aws.ic.gov",
- },
endpoints.EndpointKey{
Region: "us-iso-west-1",
}: endpoints.Endpoint{},
- endpoints.EndpointKey{
- Region: "us-iso-west-1",
- Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
- }: {
- Hostname: "dlm-fips.us-iso-west-1.api.aws.ic.gov",
- },
},
},
{
@@ -555,12 +543,6 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "us-isob-east-1",
}: endpoints.Endpoint{},
- endpoints.EndpointKey{
- Region: "us-isob-east-1",
- Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
- }: {
- Hostname: "dlm-fips.us-isob-east-1.api.aws.scloud",
- },
},
},
{
@@ -643,12 +625,6 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "us-gov-east-1",
}: endpoints.Endpoint{},
- endpoints.EndpointKey{
- Region: "us-gov-east-1",
- Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
- }: {
- Hostname: "dlm-fips.us-gov-east-1.api.aws",
- },
endpoints.EndpointKey{
Region: "us-gov-east-1",
Variant: endpoints.FIPSVariant,
@@ -667,12 +643,6 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "us-gov-west-1",
}: endpoints.Endpoint{},
- endpoints.EndpointKey{
- Region: "us-gov-west-1",
- Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
- }: {
- Hostname: "dlm-fips.us-gov-west-1.api.aws",
- },
endpoints.EndpointKey{
Region: "us-gov-west-1",
Variant: endpoints.FIPSVariant,
diff --git a/service/dlm/types/enums.go b/service/dlm/types/enums.go
index 3ee1d5b129c..6df66b606ee 100644
--- a/service/dlm/types/enums.go
+++ b/service/dlm/types/enums.go
@@ -138,6 +138,7 @@ type LocationValues string
const (
LocationValuesCloud LocationValues = "CLOUD"
LocationValuesOutpostLocal LocationValues = "OUTPOST_LOCAL"
+ LocationValuesLocalZone LocationValues = "LOCAL_ZONE"
)
// Values returns all known values for LocationValues. Note that this can be
@@ -148,6 +149,7 @@ func (LocationValues) Values() []LocationValues {
return []LocationValues{
"CLOUD",
"OUTPOST_LOCAL",
+ "LOCAL_ZONE",
}
}
@@ -195,8 +197,9 @@ type ResourceLocationValues string
// Enum values for ResourceLocationValues
const (
- ResourceLocationValuesCloud ResourceLocationValues = "CLOUD"
- ResourceLocationValuesOutpost ResourceLocationValues = "OUTPOST"
+ ResourceLocationValuesCloud ResourceLocationValues = "CLOUD"
+ ResourceLocationValuesOutpost ResourceLocationValues = "OUTPOST"
+ ResourceLocationValuesLocalZone ResourceLocationValues = "LOCAL_ZONE"
)
// Values returns all known values for ResourceLocationValues. Note that this can
@@ -207,6 +210,7 @@ func (ResourceLocationValues) Values() []ResourceLocationValues {
return []ResourceLocationValues{
"CLOUD",
"OUTPOST",
+ "LOCAL_ZONE",
}
}
diff --git a/service/dlm/types/types.go b/service/dlm/types/types.go
index 06bfcb5ba02..8f19e2d5437 100644
--- a/service/dlm/types/types.go
+++ b/service/dlm/types/types.go
@@ -66,9 +66,10 @@ type ArchiveRule struct {
type CreateRule struct {
// The schedule, as a Cron expression. The schedule interval must be between 1
- // hour and 1 year. For more information, see [Cron expressions]in the Amazon CloudWatch User Guide.
+ // hour and 1 year. For more information, see the [Cron expressions reference]in the Amazon EventBridge User
+ // Guide.
//
- // [Cron expressions]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html#CronExpressions
+ // [Cron expressions reference]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-cron-expressions.html
CronExpression *string
// The interval between snapshots. The supported values are 1, 2, 3, 4, 6, 8, 12,
@@ -79,15 +80,30 @@ type CreateRule struct {
IntervalUnit IntervalUnitValues
// [Custom snapshot policies only] Specifies the destination for snapshots
- // created by the policy. To create snapshots in the same Region as the source
- // resource, specify CLOUD . To create snapshots on the same Outpost as the source
- // resource, specify OUTPOST_LOCAL . If you omit this parameter, CLOUD is used by
- // default.
- //
- // If the policy targets resources in an Amazon Web Services Region, then you must
- // create snapshots in the same Region as the source resource. If the policy
- // targets resources on an Outpost, then you can create snapshots on the same
- // Outpost as the source resource, or in the Region of that Outpost.
+ // created by the policy. The allowed destinations depend on the location of the
+ // targeted resources.
+ //
+ // - If the policy targets resources in a Region, then you must create snapshots
+ // in the same Region as the source resource.
+ //
+ // - If the policy targets resources in a Local Zone, you can create snapshots
+ // in the same Local Zone or in its parent Region.
+ //
+ // - If the policy targets resources on an Outpost, then you can create
+ // snapshots on the same Outpost or in its parent Region.
+ //
+ // Specify one of the following values:
+ //
+ // - To create snapshots in the same Region as the source resource, specify CLOUD
+ // .
+ //
+ // - To create snapshots in the same Local Zone as the source resource, specify
+ // LOCAL_ZONE .
+ //
+ // - To create snapshots on the same Outpost as the source resource, specify
+ // OUTPOST_LOCAL .
+ //
+ // Default: CLOUD
Location LocationValues
// [Custom snapshot policies that target instances only] Specifies pre and/or
@@ -376,9 +392,7 @@ type FastRestoreRule struct {
noSmithyDocumentSerde
}
-// [Custom policies only] Detailed information about a snapshot, AMI, or
-//
-// event-based lifecycle policy.
+// Information about a lifecycle policy.
type LifecyclePolicy struct {
// The local date and time when the lifecycle policy was created.
@@ -387,11 +401,12 @@ type LifecyclePolicy struct {
// The local date and time when the lifecycle policy was last modified.
DateModified *time.Time
- // [Default policies only] The type of default policy. Values include:
+ // Indicates whether the policy is a default lifecycle policy or a custom
+ // lifecycle policy.
//
- // - VOLUME - Default policy for EBS snapshots
+ // - true - the policy is a default policy.
//
- // - INSTANCE - Default policy for EBS-backed AMIs
+ // - false - the policy is a custom policy.
DefaultPolicy *bool
// The description of the lifecycle policy.
@@ -568,24 +583,31 @@ type PolicyDetails struct {
// - STANDARD To create a custom policy.
PolicyLanguage PolicyLanguageValues
- // [Custom policies only] The valid target resource types and actions a policy
- // can manage. Specify EBS_SNAPSHOT_MANAGEMENT to create a lifecycle policy that
- // manages the lifecycle of Amazon EBS snapshots. Specify IMAGE_MANAGEMENT to
- // create a lifecycle policy that manages the lifecycle of EBS-backed AMIs. Specify
- // EVENT_BASED_POLICY to create an event-based policy that performs specific
- // actions when a defined event occurs in your Amazon Web Services account.
+ // The type of policy. Specify EBS_SNAPSHOT_MANAGEMENT to create a lifecycle
+ // policy that manages the lifecycle of Amazon EBS snapshots. Specify
+ // IMAGE_MANAGEMENT to create a lifecycle policy that manages the lifecycle of
+ // EBS-backed AMIs. Specify EVENT_BASED_POLICY to create an event-based policy
+ // that performs specific actions when a defined event occurs in your Amazon Web
+ // Services account.
//
// The default is EBS_SNAPSHOT_MANAGEMENT .
PolicyType PolicyTypeValues
// [Custom snapshot and AMI policies only] The location of the resources to
- // backup. If the source resources are located in an Amazon Web Services Region,
- // specify CLOUD . If the source resources are located on an Outpost in your
- // account, specify OUTPOST .
+ // backup.
+ //
+ // - If the source resources are located in a Region, specify CLOUD . In this
+ // case, the policy targets all resources of the specified type with matching
+ // target tags across all Availability Zones in the Region.
+ //
+ // - [Custom snapshot policies only] If the source resources are located in a
+ // Local Zone, specify LOCAL_ZONE . In this case, the policy targets all
+ // resources of the specified type with matching target tags across all Local Zones
+ // in the Region.
//
- // If you specify OUTPOST , Amazon Data Lifecycle Manager backs up all resources of
- // the specified type with matching target tags across all of the Outposts in your
- // account.
+ // - If the source resources are located on an Outpost in your account, specify
+ // OUTPOST . In this case, the policy targets all resources of the specified type
+ // with matching target tags across all of the Outposts in your account.
ResourceLocations []ResourceLocationValues
// [Default policies only] Specify the type of default policy to create.
@@ -741,11 +763,11 @@ type Schedule struct {
// The creation rule.
CreateRule *CreateRule
- // Specifies a rule for copying snapshots or AMIs across regions.
+ // Specifies a rule for copying snapshots or AMIs across Regions.
//
// You can't specify cross-Region copy rules for policies that create snapshots on
- // an Outpost. If the policy creates snapshots in a Region, then snapshots can be
- // copied to up to three Regions or Outposts.
+ // an Outpost or in a Local Zone. If the policy creates snapshots in a Region, then
+ // snapshots can be copied to up to three Regions or Outposts.
CrossRegionCopyRules []CrossRegionCopyRule
// [Custom AMI policies only] The AMI deprecation rule for the schedule.
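The destination rules the updated DLM docs spell out (Region resources → `CLOUD` only; Local Zone resources → same Local Zone or parent Region; Outpost resources → same Outpost or parent Region) reduce to a small lookup. The string values below are the documented enum values; the helper itself is an illustration, not part of the SDK:

```go
package main

import "fmt"

// allowedLocations returns the CreateRule Location values that are valid
// for a policy, keyed by where the targeted resources live. Sketch only.
func allowedLocations(resourceLocation string) []string {
	switch resourceLocation {
	case "CLOUD": // resources in a Region: snapshots stay in that Region
		return []string{"CLOUD"}
	case "LOCAL_ZONE": // same Local Zone, or the parent Region
		return []string{"LOCAL_ZONE", "CLOUD"}
	case "OUTPOST": // same Outpost, or the parent Region
		return []string{"OUTPOST_LOCAL", "CLOUD"}
	}
	return nil
}

func main() {
	fmt.Println(allowedLocations("LOCAL_ZONE")) // [LOCAL_ZONE CLOUD]
}
```

Omitting `Location` keeps the documented default of `CLOUD` in every case.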
diff --git a/service/ec2/CHANGELOG.md b/service/ec2/CHANGELOG.md
index b2af84774c7..a989b3434bc 100644
--- a/service/ec2/CHANGELOG.md
+++ b/service/ec2/CHANGELOG.md
@@ -1,3 +1,11 @@
+# v1.198.0 (2024-12-16)
+
+* **Feature**: This release adds support for EBS local snapshots in AWS Dedicated Local Zones, which allows you to store snapshots of EBS volumes locally in Dedicated Local Zones.
+
+# v1.197.0 (2024-12-13)
+
+* **Feature**: This release adds GroupId to the response for DeleteSecurityGroup.
+
# v1.196.0 (2024-12-09)
* **Feature**: This release includes a new API for modifying instance network-performance-options after launch.
diff --git a/service/ec2/api_op_CreateSnapshot.go b/service/ec2/api_op_CreateSnapshot.go
index b5a5eb697e8..e1a59604f33 100644
--- a/service/ec2/api_op_CreateSnapshot.go
+++ b/service/ec2/api_op_CreateSnapshot.go
@@ -16,11 +16,17 @@ import (
// snapshots for backups, to make copies of EBS volumes, and to save data before
// shutting down an instance.
//
-// You can create snapshots of volumes in a Region and volumes on an Outpost. If
-// you create a snapshot of a volume in a Region, the snapshot must be stored in
-// the same Region as the volume. If you create a snapshot of a volume on an
-// Outpost, the snapshot can be stored on the same Outpost as the volume, or in the
-// Region for that Outpost.
+// The location of the source EBS volume determines where you can create the
+// snapshot.
+//
+// - If the source volume is in a Region, you must create the snapshot in the
+// same Region as the volume.
+//
+// - If the source volume is in a Local Zone, you can create the snapshot in the
+// same Local Zone or in its parent Amazon Web Services Region.
+//
+// - If the source volume is on an Outpost, you can create the snapshot on the
+// same Outpost or in its parent Amazon Web Services Region.
//
// When a snapshot is created, any Amazon Web Services Marketplace product codes
// that are associated with the source volume are propagated to the snapshot.
@@ -41,16 +47,9 @@ import (
// Snapshots that are taken from encrypted volumes are automatically encrypted.
// Volumes that are created from encrypted snapshots are also automatically
// encrypted. Your encrypted volumes and any associated snapshots always remain
-// protected.
-//
-// You can tag your snapshots during creation. For more information, see [Tag your Amazon EC2 resources] in the
-// Amazon EC2 User Guide.
-//
-// For more information, see [Amazon EBS] and [Amazon EBS encryption] in the Amazon EBS User Guide.
+// protected. For more information, see [Amazon EBS encryption]in the Amazon EBS User Guide.
//
-// [Amazon EBS]: https://docs.aws.amazon.com/ebs/latest/userguide/what-is-ebs.html
// [Amazon EBS encryption]: https://docs.aws.amazon.com/ebs/latest/userguide/ebs-encryption.html
-// [Tag your Amazon EC2 resources]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
func (c *Client) CreateSnapshot(ctx context.Context, params *CreateSnapshotInput, optFns ...func(*Options)) (*CreateSnapshotOutput, error) {
if params == nil {
params = &CreateSnapshotInput{}
@@ -82,19 +81,27 @@ type CreateSnapshotInput struct {
// UnauthorizedOperation .
DryRun *bool
- // The Amazon Resource Name (ARN) of the Outpost on which to create a local
- // snapshot.
+ // Only supported for volumes in Local Zones. If the source volume is not in a
+ // Local Zone, omit this parameter.
//
- // - To create a snapshot of a volume in a Region, omit this parameter. The
- // snapshot is created in the same Region as the volume.
+ // - To create a local snapshot in the same Local Zone as the source volume,
+ // specify local .
//
- // - To create a snapshot of a volume on an Outpost and store the snapshot in
- // the Region, omit this parameter. The snapshot is created in the Region for the
- // Outpost.
+ // - To create a regional snapshot in the parent Region of the Local Zone,
+ // specify regional or omit this parameter.
//
- // - To create a snapshot of a volume on an Outpost and store the snapshot on an
- // Outpost, specify the ARN of the destination Outpost. The snapshot must be
- // created on the same Outpost as the volume.
+ // Default value: regional
+ Location types.SnapshotLocationEnum
+
+ // Only supported for volumes on Outposts. If the source volume is not on an
+ // Outpost, omit this parameter.
+ //
+ // - To create the snapshot on the same Outpost as the source volume, specify
+ // the ARN of that Outpost. The snapshot must be created on the same Outpost as the
+ // volume.
+ //
+ // - To create the snapshot in the parent Region of the Outpost, omit this
+ // parameter.
//
// For more information, see [Create local snapshots from volumes on an Outpost] in the Amazon EBS User Guide.
//
@@ -110,6 +117,10 @@ type CreateSnapshotInput struct {
// Describes a snapshot.
type CreateSnapshotOutput struct {
+ // The Availability Zone or Local Zone of the snapshot. For example, us-west-1a
+ // (Availability Zone) or us-west-2-lax-1a (Local Zone).
+ AvailabilityZone *string
+
// Only for snapshot copies created with time-based snapshot copy operations.
//
// The completion duration requested for the time-based snapshot copy operation.
diff --git a/service/ec2/api_op_CreateSnapshots.go b/service/ec2/api_op_CreateSnapshots.go
index c8f6c0fa9e0..02db596189a 100644
--- a/service/ec2/api_op_CreateSnapshots.go
+++ b/service/ec2/api_op_CreateSnapshots.go
@@ -11,19 +11,24 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Creates crash-consistent snapshots of multiple EBS volumes and stores the data
-// in S3. Volumes are chosen by specifying an instance. Any attached volumes will
-// produce one snapshot each that is crash-consistent across the instance.
+// Creates crash-consistent snapshots of multiple EBS volumes attached to an
+// Amazon EC2 instance. Volumes are chosen by specifying an instance. Each volume
+// attached to the specified instance will produce one snapshot that is
+// crash-consistent across the instance. You can include all of the volumes
+// currently attached to the instance, or you can exclude the root volume or
+// specific data (non-root) volumes from the multi-volume snapshot set.
//
-// You can include all of the volumes currently attached to the instance, or you
-// can exclude the root volume or specific data (non-root) volumes from the
-// multi-volume snapshot set.
+// The location of the source instance determines where you can create the
+// snapshots.
//
-// You can create multi-volume snapshots of instances in a Region and instances on
-// an Outpost. If you create snapshots from an instance in a Region, the snapshots
-// must be stored in the same Region as the instance. If you create snapshots from
-// an instance on an Outpost, the snapshots can be stored on the same Outpost as
-// the instance, or in the Region for that Outpost.
+// - If the source instance is in a Region, you must create the snapshots in the
+// same Region as the instance.
+//
+// - If the source instance is in a Local Zone, you can create the snapshots in
+// the same Local Zone or in its parent Amazon Web Services Region.
+//
+// - If the source instance is on an Outpost, you can create the snapshots on
+// the same Outpost or in its parent Amazon Web Services Region.
func (c *Client) CreateSnapshots(ctx context.Context, params *CreateSnapshotsInput, optFns ...func(*Options)) (*CreateSnapshotsOutput, error) {
if params == nil {
params = &CreateSnapshotsInput{}
@@ -58,23 +63,31 @@ type CreateSnapshotsInput struct {
// UnauthorizedOperation .
DryRun *bool
- // The Amazon Resource Name (ARN) of the Outpost on which to create the local
- // snapshots.
+ // Only supported for instances in Local Zones. If the source instance is not in a
+ // Local Zone, omit this parameter.
+ //
+ // - To create local snapshots in the same Local Zone as the source instance,
+ // specify local .
//
- // - To create snapshots from an instance in a Region, omit this parameter. The
- // snapshots are created in the same Region as the instance.
+ // - To create regional snapshots in the parent Region of the Local Zone,
+ // specify regional or omit this parameter.
+ //
+ // Default value: regional
+ Location types.SnapshotLocationEnum
+
+ // Only supported for instances on Outposts. If the source instance is not on an
+ // Outpost, omit this parameter.
//
- // - To create snapshots from an instance on an Outpost and store the snapshots
- // in the Region, omit this parameter. The snapshots are created in the Region for
- // the Outpost.
+ // - To create the snapshots on the same Outpost as the source instance, specify
+ // the ARN of that Outpost. The snapshots must be created on the same Outpost as
+ // the instance.
//
- // - To create snapshots from an instance on an Outpost and store the snapshots
- // on an Outpost, specify the ARN of the destination Outpost. The snapshots must be
- // created on the same Outpost as the instance.
+ // - To create the snapshots in the parent Region of the Outpost, omit this
+ // parameter.
//
- // For more information, see [Create multi-volume local snapshots from instances on an Outpost] in the Amazon EBS User Guide.
+ // For more information, see [Create local snapshots from volumes on an Outpost] in the Amazon EBS User Guide.
//
- // [Create multi-volume local snapshots from instances on an Outpost]: https://docs.aws.amazon.com/ebs/latest/userguide/snapshots-outposts.html#create-multivol-snapshot
+ // [Create local snapshots from volumes on an Outpost]: https://docs.aws.amazon.com/ebs/latest/userguide/snapshots-outposts.html#create-snapshot
OutpostArn *string
// Tags to apply to every snapshot specified by the instance.
diff --git a/service/ec2/api_op_DeleteSecurityGroup.go b/service/ec2/api_op_DeleteSecurityGroup.go
index 2515cb81f73..80d5cc78365 100644
--- a/service/ec2/api_op_DeleteSecurityGroup.go
+++ b/service/ec2/api_op_DeleteSecurityGroup.go
@@ -50,6 +50,13 @@ type DeleteSecurityGroupInput struct {
}
type DeleteSecurityGroupOutput struct {
+
+ // The ID of the deleted security group.
+ GroupId *string
+
+ // Returns true if the request succeeds; otherwise, returns an error.
+ Return *bool
+
// Metadata pertaining to the operation's result.
ResultMetadata middleware.Metadata
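Both new response fields are pointers, since EC2 may omit them from the wire response. A stdlib-only sketch of nil-safe reads, mirroring the behavior of the SDK's `aws.ToString`/`aws.ToBool` helpers (the local type and helper names here are hypothetical):

```go
package main

import "fmt"

// deleteSecurityGroupOutput mirrors the two new response members; both
// are pointers because the service can leave them unset.
type deleteSecurityGroupOutput struct {
	GroupId *string
	Return  *bool
}

// toString behaves like aws.ToString: zero value for a nil pointer.
func toString(p *string) string {
	if p == nil {
		return ""
	}
	return *p
}

// toBool behaves like aws.ToBool: false for a nil pointer.
func toBool(p *bool) bool {
	if p == nil {
		return false
	}
	return *p
}

func main() {
	id, ok := "sg-0123456789abcdef0", true
	out := deleteSecurityGroupOutput{GroupId: &id, Return: &ok}
	fmt.Printf("deleted %s (success=%v)\n", toString(out.GroupId), toBool(out.Return))
}
```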
diff --git a/service/ec2/api_op_StartDeclarativePoliciesReport.go b/service/ec2/api_op_StartDeclarativePoliciesReport.go
index 9448c06499d..f2ccf8541dd 100644
--- a/service/ec2/api_op_StartDeclarativePoliciesReport.go
+++ b/service/ec2/api_op_StartDeclarativePoliciesReport.go
@@ -31,7 +31,8 @@ import (
// account or delegated administrators for the organization.
//
// - An S3 bucket must be available before generating the report (you can create
-// a new one or use an existing one), and it must have an appropriate bucket
+// a new one or use an existing one), it must be in the same Region where the
+// report generation request is made, and it must have an appropriate bucket
// policy. For a sample S3 policy, see Sample Amazon S3 policy under .
//
// - Trusted access must be enabled for the service for which the declarative
@@ -67,7 +68,8 @@ func (c *Client) StartDeclarativePoliciesReport(ctx context.Context, params *Sta
type StartDeclarativePoliciesReportInput struct {
- // The name of the S3 bucket where the report will be saved.
+ // The name of the S3 bucket where the report will be saved. The bucket must be in
+ // the same Region where the report generation request is made.
//
// This member is required.
S3Bucket *string
diff --git a/service/ec2/deserializers.go b/service/ec2/deserializers.go
index 51ce478d97e..9c0525646e5 100644
--- a/service/ec2/deserializers.go
+++ b/service/ec2/deserializers.go
@@ -17271,10 +17271,33 @@ func (m *awsEc2query_deserializeOpDeleteSecurityGroup) HandleDeserialize(ctx con
output := &DeleteSecurityGroupOutput{}
out.Result = output
- if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+ body := io.TeeReader(response.Body, ringBuffer)
+ rootDecoder := xml.NewDecoder(body)
+ t, err := smithyxml.FetchRootElement(rootDecoder)
+ if err == io.EOF {
+ return out, metadata, nil
+ }
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
return out, metadata, &smithy.DeserializationError{
- Err: fmt.Errorf("failed to discard response body, %w", err),
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+ err = awsEc2query_deserializeOpDocumentDeleteSecurityGroupOutput(&output, decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
}
+ return out, metadata, err
}
return out, metadata, err
@@ -127826,6 +127849,19 @@ func awsEc2query_deserializeDocumentSnapshot(v **types.Snapshot, decoder smithyx
originalDecoder := decoder
decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
switch {
+ case strings.EqualFold("availabilityZone", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv := string(val)
+ sv.AvailabilityZone = ptr.String(xtv)
+ }
+
case strings.EqualFold("completionDurationMinutes", t.Name.Local):
val, err := decoder.Value()
if err != nil {
@@ -128382,6 +128418,19 @@ func awsEc2query_deserializeDocumentSnapshotInfo(v **types.SnapshotInfo, decoder
originalDecoder := decoder
decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
switch {
+ case strings.EqualFold("availabilityZone", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv := string(val)
+ sv.AvailabilityZone = ptr.String(xtv)
+ }
+
case strings.EqualFold("description", t.Name.Local):
val, err := decoder.Value()
if err != nil {
@@ -157421,6 +157470,19 @@ func awsEc2query_deserializeOpDocumentCreateSnapshotOutput(v **CreateSnapshotOut
originalDecoder := decoder
decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
switch {
+ case strings.EqualFold("availabilityZone", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv := string(val)
+ sv.AvailabilityZone = ptr.String(xtv)
+ }
+
case strings.EqualFold("completionDurationMinutes", t.Name.Local):
val, err := decoder.Value()
if err != nil {
@@ -160900,6 +160962,71 @@ func awsEc2query_deserializeOpDocumentDeleteQueuedReservedInstancesOutput(v **De
return nil
}
+func awsEc2query_deserializeOpDocumentDeleteSecurityGroupOutput(v **DeleteSecurityGroupOutput, decoder smithyxml.NodeDecoder) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ var sv *DeleteSecurityGroupOutput
+ if *v == nil {
+ sv = &DeleteSecurityGroupOutput{}
+ } else {
+ sv = *v
+ }
+
+ for {
+ t, done, err := decoder.Token()
+ if err != nil {
+ return err
+ }
+ if done {
+ break
+ }
+ originalDecoder := decoder
+ decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+ switch {
+ case strings.EqualFold("groupId", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv := string(val)
+ sv.GroupId = ptr.String(xtv)
+ }
+
+ case strings.EqualFold("return", t.Name.Local):
+ val, err := decoder.Value()
+ if err != nil {
+ return err
+ }
+ if val == nil {
+ break
+ }
+ {
+ xtv, err := strconv.ParseBool(string(val))
+ if err != nil {
+ return fmt.Errorf("expected Boolean to be of type *bool, got %T instead", val)
+ }
+ sv.Return = ptr.Bool(xtv)
+ }
+
+ default:
+ // Do nothing and ignore the unexpected tag element
+ err = decoder.Decoder.Skip()
+ if err != nil {
+ return err
+ }
+
+ }
+ decoder = originalDecoder
+ }
+ *v = sv
+ return nil
+}
+
func awsEc2query_deserializeOpDocumentDeleteSubnetCidrReservationOutput(v **DeleteSubnetCidrReservationOutput, decoder smithyxml.NodeDecoder) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
diff --git a/service/ec2/go_module_metadata.go b/service/ec2/go_module_metadata.go
index cf182b2779f..c41c45afaac 100644
--- a/service/ec2/go_module_metadata.go
+++ b/service/ec2/go_module_metadata.go
@@ -3,4 +3,4 @@
package ec2
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.196.0"
+const goModuleVersion = "1.198.0"
diff --git a/service/ec2/serializers.go b/service/ec2/serializers.go
index 34ff494bb5d..b57f10c9078 100644
--- a/service/ec2/serializers.go
+++ b/service/ec2/serializers.go
@@ -60530,6 +60530,11 @@ func awsEc2query_serializeOpDocumentCreateSnapshotInput(v *CreateSnapshotInput,
objectKey.Boolean(*v.DryRun)
}
+ if len(v.Location) > 0 {
+ objectKey := object.Key("Location")
+ objectKey.String(string(v.Location))
+ }
+
if v.OutpostArn != nil {
objectKey := object.Key("OutpostArn")
objectKey.String(*v.OutpostArn)
@@ -60576,6 +60581,11 @@ func awsEc2query_serializeOpDocumentCreateSnapshotsInput(v *CreateSnapshotsInput
}
}
+ if len(v.Location) > 0 {
+ objectKey := object.Key("Location")
+ objectKey.String(string(v.Location))
+ }
+
if v.OutpostArn != nil {
objectKey := object.Key("OutpostArn")
objectKey.String(*v.OutpostArn)
diff --git a/service/ec2/types/enums.go b/service/ec2/types/enums.go
index 1ede60e15c5..4d926f1f8d7 100644
--- a/service/ec2/types/enums.go
+++ b/service/ec2/types/enums.go
@@ -7963,6 +7963,25 @@ func (SnapshotBlockPublicAccessState) Values() []SnapshotBlockPublicAccessState
}
}
+type SnapshotLocationEnum string
+
+// Enum values for SnapshotLocationEnum
+const (
+ SnapshotLocationEnumRegional SnapshotLocationEnum = "regional"
+ SnapshotLocationEnumLocal SnapshotLocationEnum = "local"
+)
+
+// Values returns all known values for SnapshotLocationEnum. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (SnapshotLocationEnum) Values() []SnapshotLocationEnum {
+ return []SnapshotLocationEnum{
+ "regional",
+ "local",
+ }
+}
+
type SnapshotState string
// Enum values for SnapshotState
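The new `SnapshotLocationEnum` above is a string-backed enum, so its zero value `""` means "not set" — which is why the serializer hunk earlier guards with `len(v.Location) > 0` rather than a nil check. A minimal stdlib sketch of that pattern (the `serializeLocation` helper and the map-based output are illustrative, not the SDK's actual query serializer):

```go
package main

import "fmt"

// SnapshotLocationEnum mirrors the string-backed enum added in
// service/ec2/types/enums.go; the zero value "" means "not set",
// so serialization guards on length rather than nil.
type SnapshotLocationEnum string

const (
	SnapshotLocationEnumRegional SnapshotLocationEnum = "regional"
	SnapshotLocationEnumLocal    SnapshotLocationEnum = "local"
)

// serializeLocation is a hypothetical stand-in for the generated
// query serializer: the Location key is emitted only when set.
func serializeLocation(loc SnapshotLocationEnum) map[string]string {
	params := map[string]string{}
	if len(loc) > 0 {
		params["Location"] = string(loc)
	}
	return params
}

func main() {
	fmt.Println(serializeLocation(""))                        // field omitted
	fmt.Println(serializeLocation(SnapshotLocationEnumLocal)) // Location=local
}
```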
diff --git a/service/ec2/types/types.go b/service/ec2/types/types.go
index 2a262e00819..7ea991bc3ba 100644
--- a/service/ec2/types/types.go
+++ b/service/ec2/types/types.go
@@ -4944,6 +4944,10 @@ type FederatedAuthenticationRequest struct {
//
// If you specify multiple filters, the filters are joined with an AND , and the
// request returns only results that match all of the specified filters.
+//
+// For more information, see [List and filter using the CLI and API] in the Amazon EC2 User Guide.
+//
+// [List and filter using the CLI and API]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Filtering.html#Filtering_Resources_CLI
type Filter struct {
// The name of the filter. Filter names are case-sensitive.
@@ -16774,6 +16778,10 @@ type SlotStartTimeRangeRequest struct {
// Describes a snapshot.
type Snapshot struct {
+ // The Availability Zone or Local Zone of the snapshot. For example, us-west-1a
+ // (Availability Zone) or us-west-2-lax-1a (Local Zone).
+ AvailabilityZone *string
+
// Only for snapshot copies created with time-based snapshot copy operations.
//
// The completion duration requested for the time-based snapshot copy operation.
@@ -16933,6 +16941,10 @@ type SnapshotDiskContainer struct {
// Information about a snapshot.
type SnapshotInfo struct {
+ // The Availability Zone or Local Zone of the snapshots. For example, us-west-1a
+ // (Availability Zone) or us-west-2-lax-1a (Local Zone).
+ AvailabilityZone *string
+
// Description specified by the CreateSnapshotRequest that has been applied to all
// snapshots.
Description *string
diff --git a/service/ecs/CHANGELOG.md b/service/ecs/CHANGELOG.md
index d06c483e8b7..bf774e97bf7 100644
--- a/service/ecs/CHANGELOG.md
+++ b/service/ecs/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.53.0 (2024-12-17)
+
+* **Feature**: Added support for enableFaultInjection task definition parameter which can be used to enable Fault Injection feature on ECS tasks.
+
# v1.52.2 (2024-12-09)
* **Documentation**: This is a documentation only update to address various tickets for Amazon ECS.
diff --git a/service/ecs/api_op_ListServiceDeployments.go b/service/ecs/api_op_ListServiceDeployments.go
index 34c355b3e6e..3e7d782a61a 100644
--- a/service/ecs/api_op_ListServiceDeployments.go
+++ b/service/ecs/api_op_ListServiceDeployments.go
@@ -46,8 +46,8 @@ type ListServiceDeploymentsInput struct {
// The cluster that hosts the service. This can either be the cluster name or ARN.
// Starting April 15, 2023, Amazon Web Services will not onboard new customers to
// Amazon Elastic Inference (EI), and will help current customers migrate their
- // workloads to options that offer better price and performanceIf you don't specify
- // a cluster, default is used.
+ // workloads to options that offer better price and performance. If you don't
+ // specify a cluster, default is used.
Cluster *string
// An optional filter you can use to narrow the results by the service creation
diff --git a/service/ecs/api_op_RegisterTaskDefinition.go b/service/ecs/api_op_RegisterTaskDefinition.go
index 8cc12ab8203..650128c542a 100644
--- a/service/ecs/api_op_RegisterTaskDefinition.go
+++ b/service/ecs/api_op_RegisterTaskDefinition.go
@@ -107,6 +107,11 @@ type RegisterTaskDefinitionInput struct {
// This option requires Linux platform 1.4.0 or later.
Cpu *string
+ // Enables fault injection when you register your task definition and allows for
+ // fault injection requests to be accepted from the task's containers. The default
+ // value is false .
+ EnableFaultInjection *bool
+
// The amount of ephemeral storage to allocate for the task. This parameter is
// used to expand the total amount of ephemeral storage available, beyond the
// default amount, for tasks hosted on Fargate. For more information, see [Using data volumes in tasks]in the
diff --git a/service/ecs/deserializers.go b/service/ecs/deserializers.go
index ef70d94b23b..3dccf3be622 100644
--- a/service/ecs/deserializers.go
+++ b/service/ecs/deserializers.go
@@ -18283,6 +18283,15 @@ func awsAwsjson11_deserializeDocumentTaskDefinition(v **types.TaskDefinition, va
}
}
+ case "enableFaultInjection":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected BoxedBoolean to be of type *bool, got %T instead", value)
+ }
+ sv.EnableFaultInjection = ptr.Bool(jtv)
+ }
+
case "ephemeralStorage":
if err := awsAwsjson11_deserializeDocumentEphemeralStorage(&sv.EphemeralStorage, value); err != nil {
return err
diff --git a/service/ecs/go_module_metadata.go b/service/ecs/go_module_metadata.go
index 7bd536a7418..18db0244225 100644
--- a/service/ecs/go_module_metadata.go
+++ b/service/ecs/go_module_metadata.go
@@ -3,4 +3,4 @@
package ecs
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.52.2"
+const goModuleVersion = "1.53.0"
diff --git a/service/ecs/serializers.go b/service/ecs/serializers.go
index 3fd9edcc8d1..2784be3c59d 100644
--- a/service/ecs/serializers.go
+++ b/service/ecs/serializers.go
@@ -7806,6 +7806,11 @@ func awsAwsjson11_serializeOpDocumentRegisterTaskDefinitionInput(v *RegisterTask
ok.String(*v.Cpu)
}
+ if v.EnableFaultInjection != nil {
+ ok := object.Key("enableFaultInjection")
+ ok.Boolean(*v.EnableFaultInjection)
+ }
+
if v.EphemeralStorage != nil {
ok := object.Key("ephemeralStorage")
if err := awsAwsjson11_serializeDocumentEphemeralStorage(v.EphemeralStorage, ok); err != nil {
diff --git a/service/ecs/types/types.go b/service/ecs/types/types.go
index ef2d83dab9a..21fb301b590 100644
--- a/service/ecs/types/types.go
+++ b/service/ecs/types/types.go
@@ -1882,8 +1882,10 @@ type DeploymentConfiguration struct {
// the blue/green ( CODE_DEPLOY ) or EXTERNAL deployment types and has tasks that
// use the EC2 launch type.
//
- // If the tasks in the service use the Fargate launch type, the maximum percent
- // value is not used, although it is returned when describing your service.
+ // If the service uses either the blue/green ( CODE_DEPLOY ) or EXTERNAL
+ // deployment types, and the tasks in the service use the Fargate launch type, the
+ // maximum percent value is not used. The value is still returned when describing
+ // your service.
//
// [Amazon ECS services]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html
MaximumPercent *int32
@@ -3036,7 +3038,8 @@ type LogConfiguration struct {
//
// When you export logs to Amazon OpenSearch Service, you can specify options like
// Name , Host (OpenSearch Service endpoint without protocol), Port , Index , Type
- // , Aws_auth , Aws_region , Suppress_Type_Name , and tls .
+ // , Aws_auth , Aws_region , Suppress_Type_Name , and tls . For more information,
+ // see [Under the hood: FireLens for Amazon ECS Tasks].
//
// When you export logs to Amazon S3, you can specify the bucket using the bucket
// option. You can also specify region , total_file_size , upload_timeout , and
@@ -3048,6 +3051,7 @@ type LogConfiguration struct {
// command: sudo docker version --format '{{.Server.APIVersion}}'
//
// [awslogs-multiline-pattern]: https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern
+ // [Under the hood: FireLens for Amazon ECS Tasks]: http://aws.amazon.com/blogs/containers/under-the-hood-firelens-for-amazon-ecs-tasks/
// [awslogs-datetime-format]: https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format
// [Preventing log loss with non-blocking mode in the awslogs container log driver]: http://aws.amazon.com/blogs/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/
Options map[string]string
@@ -5167,6 +5171,10 @@ type TaskDefinition struct {
// The Unix timestamp for the time when the task definition was deregistered.
DeregisteredAt *time.Time
+ // Enables fault injection and allows for fault injection requests to be accepted
+ // from the task's containers. The default value is false .
+ EnableFaultInjection *bool
+
// The ephemeral storage settings to use for tasks run with the task definition.
EphemeralStorage *EphemeralStorage
diff --git a/service/eks/CHANGELOG.md b/service/eks/CHANGELOG.md
index 3f897d9cd32..3b554636980 100644
--- a/service/eks/CHANGELOG.md
+++ b/service/eks/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.54.0 (2024-12-13)
+
+* **Feature**: Add NodeRepairConfig in CreateNodegroupRequest and UpdateNodegroupConfigRequest
+
# v1.53.0 (2024-12-02)
* **Feature**: Added support for Auto Mode Clusters, Hybrid Nodes, and specifying computeTypes in the DescribeAddonVersions API.
diff --git a/service/eks/api_op_CreateNodegroup.go b/service/eks/api_op_CreateNodegroup.go
index bd78f63fc06..c44efabb1b9 100644
--- a/service/eks/api_op_CreateNodegroup.go
+++ b/service/eks/api_op_CreateNodegroup.go
@@ -136,6 +136,9 @@ type CreateNodegroupInput struct {
// [Customizing managed nodes with launch templates]: https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html
LaunchTemplate *types.LaunchTemplateSpecification
+ // The node auto repair configuration for the node group.
+ NodeRepairConfig *types.NodeRepairConfig
+
// The AMI version of the Amazon EKS optimized AMI to use with your node group. By
// default, the latest available AMI version for the node group's current
// Kubernetes version is used. For information about Linux versions, see [Amazon EKS optimized Amazon Linux AMI versions]in the
diff --git a/service/eks/api_op_UpdateNodegroupConfig.go b/service/eks/api_op_UpdateNodegroupConfig.go
index c0913e98d8c..c287486c22a 100644
--- a/service/eks/api_op_UpdateNodegroupConfig.go
+++ b/service/eks/api_op_UpdateNodegroupConfig.go
@@ -50,6 +50,9 @@ type UpdateNodegroupConfigInput struct {
// The Kubernetes labels to apply to the nodes in the node group after the update.
Labels *types.UpdateLabelsPayload
+ // The node auto repair configuration for the node group.
+ NodeRepairConfig *types.NodeRepairConfig
+
// The scaling configuration details for the Auto Scaling group after the update.
ScalingConfig *types.NodegroupScalingConfig
diff --git a/service/eks/deserializers.go b/service/eks/deserializers.go
index a3663cd07ef..c88a3cf5c47 100644
--- a/service/eks/deserializers.go
+++ b/service/eks/deserializers.go
@@ -13827,6 +13827,11 @@ func awsRestjson1_deserializeDocumentNodegroup(v **types.Nodegroup, value interf
sv.NodegroupName = ptr.String(jtv)
}
+ case "nodeRepairConfig":
+ if err := awsRestjson1_deserializeDocumentNodeRepairConfig(&sv.NodeRepairConfig, value); err != nil {
+ return err
+ }
+
case "nodeRole":
if value != nil {
jtv, ok := value.(string)
@@ -14115,6 +14120,46 @@ func awsRestjson1_deserializeDocumentNodegroupUpdateConfig(v **types.NodegroupUp
return nil
}
+func awsRestjson1_deserializeDocumentNodeRepairConfig(v **types.NodeRepairConfig, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.NodeRepairConfig
+ if *v == nil {
+ sv = &types.NodeRepairConfig{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "enabled":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected BoxedBoolean to be of type *bool, got %T instead", value)
+ }
+ sv.Enabled = ptr.Bool(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentNotFoundException(v **types.NotFoundException, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
diff --git a/service/eks/go_module_metadata.go b/service/eks/go_module_metadata.go
index e46793693d0..d2f5cf55396 100644
--- a/service/eks/go_module_metadata.go
+++ b/service/eks/go_module_metadata.go
@@ -3,4 +3,4 @@
package eks
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.53.0"
+const goModuleVersion = "1.54.0"
diff --git a/service/eks/serializers.go b/service/eks/serializers.go
index a58b7383b0c..cfcf16c42a2 100644
--- a/service/eks/serializers.go
+++ b/service/eks/serializers.go
@@ -1147,6 +1147,13 @@ func awsRestjson1_serializeOpDocumentCreateNodegroupInput(v *CreateNodegroupInpu
ok.String(*v.NodegroupName)
}
+ if v.NodeRepairConfig != nil {
+ ok := object.Key("nodeRepairConfig")
+ if err := awsRestjson1_serializeDocumentNodeRepairConfig(v.NodeRepairConfig, ok); err != nil {
+ return err
+ }
+ }
+
if v.NodeRole != nil {
ok := object.Key("nodeRole")
ok.String(*v.NodeRole)
@@ -5135,6 +5142,13 @@ func awsRestjson1_serializeOpDocumentUpdateNodegroupConfigInput(v *UpdateNodegro
}
}
+ if v.NodeRepairConfig != nil {
+ ok := object.Key("nodeRepairConfig")
+ if err := awsRestjson1_serializeDocumentNodeRepairConfig(v.NodeRepairConfig, ok); err != nil {
+ return err
+ }
+ }
+
if v.ScalingConfig != nil {
ok := object.Key("scalingConfig")
if err := awsRestjson1_serializeDocumentNodegroupScalingConfig(v.ScalingConfig, ok); err != nil {
@@ -5860,6 +5874,18 @@ func awsRestjson1_serializeDocumentNodegroupUpdateConfig(v *types.NodegroupUpdat
return nil
}
+func awsRestjson1_serializeDocumentNodeRepairConfig(v *types.NodeRepairConfig, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Enabled != nil {
+ ok := object.Key("enabled")
+ ok.Boolean(*v.Enabled)
+ }
+
+ return nil
+}
+
func awsRestjson1_serializeDocumentOidcIdentityProviderConfigRequest(v *types.OidcIdentityProviderConfigRequest, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
diff --git a/service/eks/types/enums.go b/service/eks/types/enums.go
index 79bc48c4803..83a3903a0a0 100644
--- a/service/eks/types/enums.go
+++ b/service/eks/types/enums.go
@@ -759,6 +759,7 @@ const (
UpdateParamTypeResolveConflicts UpdateParamType = "ResolveConflicts"
UpdateParamTypeMaxUnavailable UpdateParamType = "MaxUnavailable"
UpdateParamTypeMaxUnavailablePercentage UpdateParamType = "MaxUnavailablePercentage"
+ UpdateParamTypeNodeRepairEnabled UpdateParamType = "NodeRepairEnabled"
UpdateParamTypeConfigurationValues UpdateParamType = "ConfigurationValues"
UpdateParamTypeSecurityGroups UpdateParamType = "SecurityGroups"
UpdateParamTypeSubnets UpdateParamType = "Subnets"
@@ -800,6 +801,7 @@ func (UpdateParamType) Values() []UpdateParamType {
"ResolveConflicts",
"MaxUnavailable",
"MaxUnavailablePercentage",
+ "NodeRepairEnabled",
"ConfigurationValues",
"SecurityGroups",
"Subnets",
diff --git a/service/eks/types/types.go b/service/eks/types/types.go
index e70cd9a7622..b7ac14cb62b 100644
--- a/service/eks/types/types.go
+++ b/service/eks/types/types.go
@@ -1307,6 +1307,9 @@ type Nodegroup struct {
// The Unix epoch timestamp for the last modification to the object.
ModifiedAt *time.Time
+ // The node auto repair configuration for the node group.
+ NodeRepairConfig *NodeRepairConfig
+
// The IAM role associated with your node group. The Amazon EKS node kubelet
// daemon makes calls to Amazon Web Services APIs on your behalf. Nodes receive
// permissions for these API calls through an IAM instance profile and associated
@@ -1450,6 +1453,16 @@ type NodegroupUpdateConfig struct {
noSmithyDocumentSerde
}
+// The node auto repair configuration for the node group.
+type NodeRepairConfig struct {
+
+ // Specifies whether to enable node auto repair for the node group. Node auto
+ // repair is disabled by default.
+ Enabled *bool
+
+ noSmithyDocumentSerde
+}
+
// An object representing the [OpenID Connect] (OIDC) identity provider information for the
// cluster.
//
@@ -1790,10 +1803,56 @@ type RemoteAccessConfig struct {
type RemoteNetworkConfigRequest struct {
// The list of network CIDRs that can contain hybrid nodes.
+ //
+ // These CIDR blocks define the expected IP address range of the hybrid nodes that
+ // join the cluster. These blocks are typically determined by your network
+ // administrator.
+ //
+ // Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for
+ // example, 10.2.0.0/16 ).
+ //
+ // It must satisfy the following requirements:
+ //
+ // - Each block must be within an IPv4 RFC-1918 network range. Minimum allowed
+ // size is /24, maximum allowed size is /8. Publicly-routable addresses aren't
+ // supported.
+ //
+ // - Each block cannot overlap with the range of the VPC CIDR blocks for your
+ // EKS resources, or the block of the Kubernetes service IP range.
+ //
+ // - Each block must have a route to the VPC that uses the VPC CIDR blocks, not
+ // public IPs or Elastic IPs. There are many options including Transit Gateway,
+ // Site-to-Site VPN, or Direct Connect.
+ //
+ // - Each host must allow outbound connection to the EKS cluster control plane
+ // on TCP ports 443 and 10250 .
+ //
+ // - Each host must allow inbound connection from the EKS cluster control plane
+ // on TCP port 10250 for logs, exec and port-forward operations.
+ //
+ // - Each host must allow TCP and UDP network connectivity to and from other
+ // hosts that are running CoreDNS on UDP port 53 for service and pod DNS names.
RemoteNodeNetworks []RemoteNodeNetwork
// The list of network CIDRs that can contain pods that run Kubernetes webhooks on
// hybrid nodes.
+ //
+ // These CIDR blocks are determined by configuring your Container Network
+ // Interface (CNI) plugin. We recommend the Calico CNI or Cilium CNI. Note that the
+ // Amazon VPC CNI plugin for Kubernetes isn't available for on-premises and edge
+ // locations.
+ //
+ // Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for
+ // example, 10.2.0.0/16 ).
+ //
+ // It must satisfy the following requirements:
+ //
+ // - Each block must be within an IPv4 RFC-1918 network range. Minimum allowed
+ // size is /24, maximum allowed size is /8. Publicly-routable addresses aren't
+ // supported.
+ //
+ // - Each block cannot overlap with the range of the VPC CIDR blocks for your
+ // EKS resources, or the block of the Kubernetes service IP range.
RemotePodNetworks []RemotePodNetwork
noSmithyDocumentSerde
@@ -1814,9 +1873,67 @@ type RemoteNetworkConfigResponse struct {
}
// A network CIDR that can contain hybrid nodes.
+//
+// These CIDR blocks define the expected IP address range of the hybrid nodes that
+// join the cluster. These blocks are typically determined by your network
+// administrator.
+//
+// Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for
+// example, 10.2.0.0/16 ).
+//
+// It must satisfy the following requirements:
+//
+// - Each block must be within an IPv4 RFC-1918 network range. Minimum allowed
+// size is /24, maximum allowed size is /8. Publicly-routable addresses aren't
+// supported.
+//
+// - Each block cannot overlap with the range of the VPC CIDR blocks for your
+// EKS resources, or the block of the Kubernetes service IP range.
+//
+// - Each block must have a route to the VPC that uses the VPC CIDR blocks, not
+// public IPs or Elastic IPs. There are many options including Transit Gateway,
+// Site-to-Site VPN, or Direct Connect.
+//
+// - Each host must allow outbound connection to the EKS cluster control plane
+// on TCP ports 443 and 10250 .
+//
+// - Each host must allow inbound connection from the EKS cluster control plane
+// on TCP port 10250 for logs, exec and port-forward operations.
+//
+// - Each host must allow TCP and UDP network connectivity to and from other
+// hosts that are running CoreDNS on UDP port 53 for service and pod DNS names.
type RemoteNodeNetwork struct {
// A network CIDR that can contain hybrid nodes.
+ //
+ // These CIDR blocks define the expected IP address range of the hybrid nodes that
+ // join the cluster. These blocks are typically determined by your network
+ // administrator.
+ //
+ // Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for
+ // example, 10.2.0.0/16 ).
+ //
+ // It must satisfy the following requirements:
+ //
+ // - Each block must be within an IPv4 RFC-1918 network range. Minimum allowed
+ // size is /24, maximum allowed size is /8. Publicly-routable addresses aren't
+ // supported.
+ //
+ // - Each block cannot overlap with the range of the VPC CIDR blocks for your
+ // EKS resources, or the block of the Kubernetes service IP range.
+ //
+ // - Each block must have a route to the VPC that uses the VPC CIDR blocks, not
+ // public IPs or Elastic IPs. There are many options including Transit Gateway,
+ // Site-to-Site VPN, or Direct Connect.
+ //
+ // - Each host must allow outbound connection to the EKS cluster control plane
+ // on TCP ports 443 and 10250 .
+ //
+ // - Each host must allow inbound connection from the EKS cluster control plane
+ // on TCP port 10250 for logs, exec and port-forward operations.
+ //
+ // - Each host must allow TCP and UDP network connectivity to and from other
+ // hosts that are running CoreDNS on UDP port 53 for service and pod DNS names.
Cidrs []string
noSmithyDocumentSerde
@@ -1824,10 +1941,44 @@ type RemoteNodeNetwork struct {
// A network CIDR that can contain pods that run Kubernetes webhooks on hybrid
// nodes.
+//
+// These CIDR blocks are determined by configuring your Container Network
+// Interface (CNI) plugin. We recommend the Calico CNI or Cilium CNI. Note that the
+// Amazon VPC CNI plugin for Kubernetes isn't available for on-premises and edge
+// locations.
+//
+// Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for
+// example, 10.2.0.0/16 ).
+//
+// It must satisfy the following requirements:
+//
+// - Each block must be within an IPv4 RFC-1918 network range. Minimum allowed
+// size is /24, maximum allowed size is /8. Publicly-routable addresses aren't
+// supported.
+//
+// - Each block cannot overlap with the range of the VPC CIDR blocks for your
+// EKS resources, or the block of the Kubernetes service IP range.
type RemotePodNetwork struct {
// A network CIDR that can contain pods that run Kubernetes webhooks on hybrid
// nodes.
+ //
+ // These CIDR blocks are determined by configuring your Container Network
+ // Interface (CNI) plugin. We recommend the Calico CNI or Cilium CNI. Note that the
+ // Amazon VPC CNI plugin for Kubernetes isn't available for on-premises and edge
+ // locations.
+ //
+ // Enter one or more IPv4 CIDR blocks in decimal dotted-quad notation (for
+ // example, 10.2.0.0/16 ).
+ //
+ // It must satisfy the following requirements:
+ //
+ // - Each block must be within an IPv4 RFC-1918 network range. Minimum allowed
+ // size is /24, maximum allowed size is /8. Publicly-routable addresses aren't
+ // supported.
+ //
+ // - Each block cannot overlap with the range of the VPC CIDR blocks for your
+ // EKS resources, or the block of the Kubernetes service IP range.
Cidrs []string
noSmithyDocumentSerde
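The expanded `RemoteNodeNetwork` documentation above lists concrete CIDR constraints: IPv4, within an RFC-1918 range, prefix length between /8 and /24. A client-side validation sketch of just those three checks using `net/netip` (the `validRemoteNodeCIDR` helper is an assumption for illustration; the overlap and routing requirements from the doc comment are out of scope, and the service performs its own authoritative validation):

```go
package main

import (
	"fmt"
	"net/netip"
)

// rfc1918 holds the three private IPv4 ranges the doc comment requires.
var rfc1918 = []netip.Prefix{
	netip.MustParsePrefix("10.0.0.0/8"),
	netip.MustParsePrefix("172.16.0.0/12"),
	netip.MustParsePrefix("192.168.0.0/16"),
}

// validRemoteNodeCIDR checks the syntactic constraints only: IPv4,
// prefix length /8..= /24, base address inside an RFC-1918 range.
func validRemoteNodeCIDR(cidr string) bool {
	p, err := netip.ParsePrefix(cidr)
	if err != nil || !p.Addr().Is4() {
		return false
	}
	// "Minimum allowed size is /24, maximum allowed size is /8."
	if p.Bits() < 8 || p.Bits() > 24 {
		return false
	}
	for _, r := range rfc1918 {
		if r.Contains(p.Addr()) {
			return true
		}
	}
	return false // publicly-routable addresses aren't supported
}

func main() {
	fmt.Println(validRemoteNodeCIDR("10.2.0.0/16")) // the doc comment's example
	fmt.Println(validRemoteNodeCIDR("8.8.8.0/24"))  // publicly routable
	fmt.Println(validRemoteNodeCIDR("10.0.0.0/30")) // smaller than /24
}
```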
diff --git a/service/glue/CHANGELOG.md b/service/glue/CHANGELOG.md
index 7d83905e6ec..5aa9beb4343 100644
--- a/service/glue/CHANGELOG.md
+++ b/service/glue/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.104.0 (2024-12-12)
+
+* **Feature**: To support customer-managed encryption in Data Quality to allow customers encrypt data with their own KMS key, we will add a DataQualityEncryption field to the SecurityConfiguration API where customers can provide their KMS keys.
+
# v1.103.0 (2024-12-03.2)
* **Feature**: This release includes(1)Zero-ETL integration to ingest data from 3P SaaS and DynamoDB to Redshift/Redlake (2)new properties on Connections to enable reuse; new connection APIs for retrieve/preview metadata (3)support of CRUD operations for Multi-catalog (4)support of automatic statistics collections
diff --git a/service/glue/api_op_CreateJob.go b/service/glue/api_op_CreateJob.go
index 691e40e0c48..f7e1eaafb38 100644
--- a/service/glue/api_op_CreateJob.go
+++ b/service/glue/api_op_CreateJob.go
@@ -221,41 +221,39 @@ type CreateJobInput struct {
// for Ray jobs.
//
// - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 94GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, to offer
+ // a scalable and cost-effective way to run most jobs.
//
// - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
- // memory) with 128GB disk (approximately 77GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 138GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, to offer
+ // a scalable and cost-effective way to run most jobs.
//
// - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of
- // memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs in the following
- // Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West
- // (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
- // Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
+ // memory) with 256GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs in the following Amazon Web Services
+ // Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific
+ // (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
+ // Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
//
// - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of
- // memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon
- // Web Services Regions as supported for the G.4X worker type.
+ // memory) with 512GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
+ // supported for the G.4X worker type.
//
// - For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for low volume streaming jobs. This worker
- // type is only available for Glue version 3.0 streaming jobs.
+ // memory) with 84GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for low volume streaming jobs. This worker type is only available
+ // for Glue version 3.0 or later streaming jobs.
//
// - For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of
- // memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray
- // workers based on the autoscaler.
+ // memory) with 128 GB disk, and provides up to 8 Ray workers based on the
+ // autoscaler.
WorkerType types.WorkerType
noSmithyDocumentSerde
diff --git a/service/glue/api_op_CreateSession.go b/service/glue/api_op_CreateSession.go
index 761826f3670..4299b5ad7dc 100644
--- a/service/glue/api_op_CreateSession.go
+++ b/service/glue/api_op_CreateSession.go
@@ -89,36 +89,34 @@ type CreateSessionInput struct {
// Ray notebooks.
//
// - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 94GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, to offer
+ // a scalable and cost-effective way to run most jobs.
//
// - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
- // memory) with 128GB disk (approximately 77GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 138GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, to offer
+ // a scalable and cost-effective way to run most jobs.
//
// - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of
- // memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs in the following
- // Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West
- // (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
- // Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
+ // memory) with 256GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs in the following Amazon Web Services
+ // Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific
+ // (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
+ // Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
//
// - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of
- // memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon
- // Web Services Regions as supported for the G.4X worker type.
+ // memory) with 512GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
+ // supported for the G.4X worker type.
//
// - For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of
- // memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray
- // workers based on the autoscaler.
+ // memory) with 128 GB disk, and provides up to 8 Ray workers based on the
+ // autoscaler.
WorkerType types.WorkerType
noSmithyDocumentSerde
diff --git a/service/glue/api_op_CreateTrigger.go b/service/glue/api_op_CreateTrigger.go
index bbdd4abd900..ad99001e5b4 100644
--- a/service/glue/api_op_CreateTrigger.go
+++ b/service/glue/api_op_CreateTrigger.go
@@ -12,6 +12,10 @@ import (
)
// Creates a new trigger.
+//
+// Job arguments may be logged. Do not pass plaintext secrets as arguments.
+// Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager,
+// or another secret management mechanism if you intend to keep them in the job.
func (c *Client) CreateTrigger(ctx context.Context, params *CreateTriggerInput, optFns ...func(*Options)) (*CreateTriggerOutput, error) {
if params == nil {
params = &CreateTriggerInput{}
diff --git a/service/glue/api_op_CreateWorkflow.go b/service/glue/api_op_CreateWorkflow.go
index 1c81c3915a6..0522a14506b 100644
--- a/service/glue/api_op_CreateWorkflow.go
+++ b/service/glue/api_op_CreateWorkflow.go
@@ -35,6 +35,11 @@ type CreateWorkflowInput struct {
Name *string
// A collection of properties to be used as part of each execution of the workflow.
+ //
+ // Run properties may be logged. Do not pass plaintext secrets as properties.
+ // Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager,
+ // or another secret management mechanism if you intend to use them within the
+ // workflow run.
DefaultRunProperties map[string]string
// A description of the workflow.
diff --git a/service/glue/api_op_GetJobRun.go b/service/glue/api_op_GetJobRun.go
index 9c178bf2823..6127824a6e5 100644
--- a/service/glue/api_op_GetJobRun.go
+++ b/service/glue/api_op_GetJobRun.go
@@ -12,7 +12,7 @@ import (
)
// Retrieves the metadata for a given job run. Job run history is accessible for
-// 90 days for your workflow and job run.
+// 365 days for your workflow and job run.
func (c *Client) GetJobRun(ctx context.Context, params *GetJobRunInput, optFns ...func(*Options)) (*GetJobRunOutput, error) {
if params == nil {
params = &GetJobRunInput{}
diff --git a/service/glue/api_op_GetJobRuns.go b/service/glue/api_op_GetJobRuns.go
index 1215267b6a4..d07da60430f 100644
--- a/service/glue/api_op_GetJobRuns.go
+++ b/service/glue/api_op_GetJobRuns.go
@@ -12,6 +12,9 @@ import (
)
// Retrieves metadata for all runs of a given job definition.
+//
+// GetJobRuns returns the job runs in reverse chronological order, with the
+// newest runs returned first.
func (c *Client) GetJobRuns(ctx context.Context, params *GetJobRunsInput, optFns ...func(*Options)) (*GetJobRunsOutput, error) {
if params == nil {
params = &GetJobRunsInput{}
diff --git a/service/glue/api_op_PutWorkflowRunProperties.go b/service/glue/api_op_PutWorkflowRunProperties.go
index a0ae5df1d60..cfc45576580 100644
--- a/service/glue/api_op_PutWorkflowRunProperties.go
+++ b/service/glue/api_op_PutWorkflowRunProperties.go
@@ -42,6 +42,11 @@ type PutWorkflowRunPropertiesInput struct {
// The properties to put for the specified run.
//
+ // Run properties may be logged. Do not pass plaintext secrets as properties.
+ // Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager,
+ // or another secret management mechanism if you intend to use them within the
+ // workflow run.
+ //
// This member is required.
RunProperties map[string]string
diff --git a/service/glue/api_op_StartJobRun.go b/service/glue/api_op_StartJobRun.go
index f72d15af95c..eb1257e2d3b 100644
--- a/service/glue/api_op_StartJobRun.go
+++ b/service/glue/api_op_StartJobRun.go
@@ -141,41 +141,39 @@ type StartJobRunInput struct {
// for Ray jobs.
//
// - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 94GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
- // memory) with 128GB disk (approximately 77GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 138GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of
- // memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs in the following
- // Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West
- // (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
- // Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
+ // memory) with 256GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs in the following Amazon Web Services
+ // Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific
+ // (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
+ // Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
//
// - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of
- // memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon
- // Web Services Regions as supported for the G.4X worker type.
+ // memory) with 512GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
+ // supported for the G.4X worker type.
//
// - For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for low volume streaming jobs. This worker
- // type is only available for Glue version 3.0 streaming jobs.
+ // memory) with 84GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for low volume streaming jobs. This worker type is only available
+ // for Glue version 3.0 or later streaming jobs.
//
// - For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of
- // memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray
- // workers based on the autoscaler.
+ // memory) with 128 GB disk, and provides up to 8 Ray workers based on the
+ // autoscaler.
WorkerType types.WorkerType
noSmithyDocumentSerde
diff --git a/service/glue/api_op_StartWorkflowRun.go b/service/glue/api_op_StartWorkflowRun.go
index df59cfd73ea..513ecb6c2d1 100644
--- a/service/glue/api_op_StartWorkflowRun.go
+++ b/service/glue/api_op_StartWorkflowRun.go
@@ -34,6 +34,11 @@ type StartWorkflowRunInput struct {
Name *string
// The workflow run properties for the new workflow run.
+ //
+ // Run properties may be logged. Do not pass plaintext secrets as properties.
+ // Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager,
+ // or another secret management mechanism if you intend to use them within the
+ // workflow run.
RunProperties map[string]string
noSmithyDocumentSerde
diff --git a/service/glue/api_op_UpdateTrigger.go b/service/glue/api_op_UpdateTrigger.go
index dd0167774bf..5ab21172316 100644
--- a/service/glue/api_op_UpdateTrigger.go
+++ b/service/glue/api_op_UpdateTrigger.go
@@ -12,6 +12,10 @@ import (
)
// Updates a trigger definition.
+//
+// Job arguments may be logged. Do not pass plaintext secrets as arguments.
+// Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager,
+// or another secret management mechanism if you intend to keep them in the job.
func (c *Client) UpdateTrigger(ctx context.Context, params *UpdateTriggerInput, optFns ...func(*Options)) (*UpdateTriggerOutput, error) {
if params == nil {
params = &UpdateTriggerInput{}
diff --git a/service/glue/api_op_UpdateWorkflow.go b/service/glue/api_op_UpdateWorkflow.go
index 27fdee77767..f942a725a7e 100644
--- a/service/glue/api_op_UpdateWorkflow.go
+++ b/service/glue/api_op_UpdateWorkflow.go
@@ -34,6 +34,11 @@ type UpdateWorkflowInput struct {
Name *string
// A collection of properties to be used as part of each execution of the workflow.
+ //
+ // Run properties may be logged. Do not pass plaintext secrets as properties.
+ // Retrieve secrets from a Glue Connection, Amazon Web Services Secrets Manager,
+ // or another secret management mechanism if you intend to use them within the
+ // workflow run.
DefaultRunProperties map[string]string
// The description of the workflow.
diff --git a/service/glue/deserializers.go b/service/glue/deserializers.go
index e093a7138d8..8e50e7e8ff4 100644
--- a/service/glue/deserializers.go
+++ b/service/glue/deserializers.go
@@ -41728,6 +41728,55 @@ func awsAwsjson11_deserializeDocumentDataQualityAnalyzerResults(v *[]types.DataQ
return nil
}
+func awsAwsjson11_deserializeDocumentDataQualityEncryption(v **types.DataQualityEncryption, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.DataQualityEncryption
+ if *v == nil {
+ sv = &types.DataQualityEncryption{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "DataQualityEncryptionMode":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected DataQualityEncryptionMode to be of type string, got %T instead", value)
+ }
+ sv.DataQualityEncryptionMode = types.DataQualityEncryptionMode(jtv)
+ }
+
+ case "KmsKeyArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected KmsKeyArn to be of type string, got %T instead", value)
+ }
+ sv.KmsKeyArn = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeDocumentDataQualityEvaluationRunAdditionalRunOptions(v **types.DataQualityEvaluationRunAdditionalRunOptions, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -45057,6 +45106,11 @@ func awsAwsjson11_deserializeDocumentEncryptionConfiguration(v **types.Encryptio
return err
}
+ case "DataQualityEncryption":
+ if err := awsAwsjson11_deserializeDocumentDataQualityEncryption(&sv.DataQualityEncryption, value); err != nil {
+ return err
+ }
+
case "JobBookmarksEncryption":
if err := awsAwsjson11_deserializeDocumentJobBookmarksEncryption(&sv.JobBookmarksEncryption, value); err != nil {
return err
diff --git a/service/glue/go_module_metadata.go b/service/glue/go_module_metadata.go
index 05aaf970aa4..7b543085722 100644
--- a/service/glue/go_module_metadata.go
+++ b/service/glue/go_module_metadata.go
@@ -3,4 +3,4 @@
package glue
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.103.0"
+const goModuleVersion = "1.104.0"
diff --git a/service/glue/serializers.go b/service/glue/serializers.go
index c524263472f..c5759eda79f 100644
--- a/service/glue/serializers.go
+++ b/service/glue/serializers.go
@@ -18286,6 +18286,23 @@ func awsAwsjson11_serializeDocumentDatapointInclusionAnnotation(v *types.Datapoi
return nil
}
+func awsAwsjson11_serializeDocumentDataQualityEncryption(v *types.DataQualityEncryption, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if len(v.DataQualityEncryptionMode) > 0 {
+ ok := object.Key("DataQualityEncryptionMode")
+ ok.String(string(v.DataQualityEncryptionMode))
+ }
+
+ if v.KmsKeyArn != nil {
+ ok := object.Key("KmsKeyArn")
+ ok.String(*v.KmsKeyArn)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeDocumentDataQualityEvaluationRunAdditionalRunOptions(v *types.DataQualityEvaluationRunAdditionalRunOptions, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -19192,6 +19209,13 @@ func awsAwsjson11_serializeDocumentEncryptionConfiguration(v *types.EncryptionCo
}
}
+ if v.DataQualityEncryption != nil {
+ ok := object.Key("DataQualityEncryption")
+ if err := awsAwsjson11_serializeDocumentDataQualityEncryption(v.DataQualityEncryption, ok); err != nil {
+ return err
+ }
+ }
+
if v.JobBookmarksEncryption != nil {
ok := object.Key("JobBookmarksEncryption")
if err := awsAwsjson11_serializeDocumentJobBookmarksEncryption(v.JobBookmarksEncryption, ok); err != nil {
diff --git a/service/glue/types/enums.go b/service/glue/types/enums.go
index 5b08eea0b9c..de569962afa 100644
--- a/service/glue/types/enums.go
+++ b/service/glue/types/enums.go
@@ -765,6 +765,25 @@ func (DataOperation) Values() []DataOperation {
}
}
+type DataQualityEncryptionMode string
+
+// Enum values for DataQualityEncryptionMode
+const (
+ DataQualityEncryptionModeDisabled DataQualityEncryptionMode = "DISABLED"
+ DataQualityEncryptionModeSsekms DataQualityEncryptionMode = "SSE-KMS"
+)
+
+// Values returns all known values for DataQualityEncryptionMode. Note that this
+// can be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (DataQualityEncryptionMode) Values() []DataQualityEncryptionMode {
+ return []DataQualityEncryptionMode{
+ "DISABLED",
+ "SSE-KMS",
+ }
+}
+
type DataQualityModelStatus string
// Enum values for DataQualityModelStatus
diff --git a/service/glue/types/types.go b/service/glue/types/types.go
index 4320590b241..6b985a0bd66 100644
--- a/service/glue/types/types.go
+++ b/service/glue/types/types.go
@@ -3079,6 +3079,23 @@ type DataQualityAnalyzerResult struct {
noSmithyDocumentSerde
}
+// Specifies how Data Quality assets in your account should be encrypted.
+type DataQualityEncryption struct {
+
+ // The encryption mode to use for encrypting Data Quality assets. These assets
+ // include data quality rulesets, results, statistics, anomaly detection models and
+ // observations.
+ //
+ // Valid values are SSE-KMS for encryption using a customer-managed KMS key, or
+ // DISABLED .
+ DataQualityEncryptionMode DataQualityEncryptionMode
+
+ // The Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data.
+ KmsKeyArn *string
+
+ noSmithyDocumentSerde
+}
+
// Additional run options you can specify for an evaluation run.
type DataQualityEvaluationRunAdditionalRunOptions struct {
@@ -4018,6 +4035,9 @@ type EncryptionConfiguration struct {
// The encryption configuration for Amazon CloudWatch.
CloudWatchEncryption *CloudWatchEncryption
+ // The encryption configuration for Glue Data Quality assets.
+ DataQualityEncryption *DataQualityEncryption
+
// The encryption configuration for job bookmarks.
JobBookmarksEncryption *JobBookmarksEncryption
@@ -5392,41 +5412,39 @@ type Job struct {
// for Ray jobs.
//
// - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 94GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
- // memory) with 128GB disk (approximately 77GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 138GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of
- // memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs in the following
- // Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West
- // (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
- // Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
+ // memory) with 256GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs in the following Amazon Web Services
+ // Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific
+ // (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
+ // Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
//
// - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of
- // memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon
- // Web Services Regions as supported for the G.4X worker type.
+ // memory) with 512GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
+ // supported for the G.4X worker type.
//
// - For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for low volume streaming jobs. This worker
- // type is only available for Glue version 3.0 streaming jobs.
+ // memory) with 84GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for low volume streaming jobs. This worker type is only available
+ // for Glue version 3.0 or later streaming jobs.
//
// - For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of
- // memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray
- // workers based on the autoscaler.
+ // memory) with 128 GB disk, and provides up to 8 Ray workers based on the
+ // autoscaler.
WorkerType WorkerType
noSmithyDocumentSerde
@@ -5719,41 +5737,39 @@ type JobRun struct {
// for Ray jobs.
//
// - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 94GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
- // memory) with 128GB disk (approximately 77GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 138GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of
- // memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs in the following
- // Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West
- // (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
- // Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
+ // memory) with 256GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs in the following Amazon Web Services
+ // Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific
+ // (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
+ // Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
//
// - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of
- // memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon
- // Web Services Regions as supported for the G.4X worker type.
+ // memory) with 512GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
+ // supported for the G.4X worker type.
//
// - For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for low volume streaming jobs. This worker
- // type is only available for Glue version 3.0 streaming jobs.
+ // memory) with 84GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for low volume streaming jobs. This worker type is only available
+ // for Glue version 3.0 or later streaming jobs.
//
// - For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of
- // memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray
- // workers based on the autoscaler.
+ // memory) with 128 GB disk, and provides up to 8 Ray workers based on the
+ // autoscaler.
WorkerType WorkerType
noSmithyDocumentSerde
@@ -5941,41 +5957,39 @@ type JobUpdate struct {
// for Ray jobs.
//
// - For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 94GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
- // memory) with 128GB disk (approximately 77GB free), and provides 1 executor per
- // worker. We recommend this worker type for workloads such as data transforms,
- // joins, and queries, to offers a scalable and cost effective way to run most
- // jobs.
+ // memory) with 138GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for workloads such as data transforms, joins, and queries, which
+ // offers a scalable and cost-effective way to run most jobs.
//
// - For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of
- // memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs in the following
- // Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West
- // (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
- // Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
+ // memory) with 256GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs in the following Amazon Web Services
+ // Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific
+ // (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central),
+ // Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
//
// - For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of
- // memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
- // worker. We recommend this worker type for jobs whose workloads contain your most
- // demanding transforms, aggregations, joins, and queries. This worker type is
- // available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon
- // Web Services Regions as supported for the G.4X worker type.
+ // memory) with 512GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for jobs whose workloads contain your most demanding transforms,
+ // aggregations, joins, and queries. This worker type is available only for Glue
+ // version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
+ // supported for the G.4X worker type.
//
// - For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of
- // memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
- // worker. We recommend this worker type for low volume streaming jobs. This worker
- // type is only available for Glue version 3.0 streaming jobs.
+ // memory) with 84GB disk, and provides 1 executor per worker. We recommend this
+ // worker type for low volume streaming jobs. This worker type is only available
+ // for Glue version 3.0 or later streaming jobs.
//
// - For the Z.2X worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of
- // memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray
- // workers based on the autoscaler.
+ // memory) with 128 GB disk, and provides up to 8 Ray workers based on the
+ // autoscaler.
WorkerType WorkerType
noSmithyDocumentSerde
diff --git a/service/greengrassv2/CHANGELOG.md b/service/greengrassv2/CHANGELOG.md
index ad056821c4c..1813151fcf4 100644
--- a/service/greengrassv2/CHANGELOG.md
+++ b/service/greengrassv2/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.36.0 (2024-12-16)
+
+* **Feature**: Add support for runtime in GetCoreDevice and ListCoreDevices APIs.
+
# v1.35.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/greengrassv2/api_op_GetCoreDevice.go b/service/greengrassv2/api_op_GetCoreDevice.go
index 644395c91bf..a6192f256e2 100644
--- a/service/greengrassv2/api_op_GetCoreDevice.go
+++ b/service/greengrassv2/api_op_GetCoreDevice.go
@@ -83,6 +83,13 @@ type GetCoreDeviceOutput struct {
// The operating system platform that the core device runs.
Platform *string
+ // The runtime for the core device. The runtime can be:
+ //
+ // - aws_nucleus_classic
+ //
+ // - aws_nucleus_lite
+ Runtime *string
+
// The status of the core device. The core device status can be:
//
// - HEALTHY – The IoT Greengrass Core software and all components run on the
diff --git a/service/greengrassv2/api_op_ListCoreDevices.go b/service/greengrassv2/api_op_ListCoreDevices.go
index 5bde0592801..6938ad9cc47 100644
--- a/service/greengrassv2/api_op_ListCoreDevices.go
+++ b/service/greengrassv2/api_op_ListCoreDevices.go
@@ -26,7 +26,13 @@ import (
// - When the core device receives a deployment from the Amazon Web Services
// Cloud
//
-// - When the status of any component on the core device becomes BROKEN
+// - For Greengrass nucleus 2.12.2 and earlier, the core device sends status
+// updates when the status of any component on the core device becomes ERRORED or
+// BROKEN .
+//
+// - For Greengrass nucleus 2.12.3 and later, the core device sends status
+// updates when the status of any component on the core device becomes ERRORED ,
+// BROKEN , RUNNING , or FINISHED .
//
// - At a [regular interval that you can configure], which defaults to 24 hours
//
@@ -57,6 +63,13 @@ type ListCoreDevicesInput struct {
// The token to be used for the next set of paginated results.
NextToken *string
+ // The runtime to be used by the core device. The runtime can be:
+ //
+ // - aws_nucleus_classic
+ //
+ // - aws_nucleus_lite
+ Runtime *string
+
// The core device status by which to filter. If you specify this parameter, the
// list includes only core devices that have this status. Choose one of the
// following options:
diff --git a/service/greengrassv2/deserializers.go b/service/greengrassv2/deserializers.go
index 6fb2492ff3d..15efc7a2cfa 100644
--- a/service/greengrassv2/deserializers.go
+++ b/service/greengrassv2/deserializers.go
@@ -2534,6 +2534,15 @@ func awsRestjson1_deserializeOpDocumentGetCoreDeviceOutput(v **GetCoreDeviceOutp
sv.Platform = ptr.String(jtv)
}
+ case "runtime":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CoreDeviceRuntimeString to be of type string, got %T instead", value)
+ }
+ sv.Runtime = ptr.String(jtv)
+ }
+
case "status":
if value != nil {
jtv, ok := value.(string)
@@ -6258,6 +6267,15 @@ func awsRestjson1_deserializeDocumentCoreDevice(v **types.CoreDevice, value inte
for key, value := range shape {
switch key {
+ case "architecture":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CoreDeviceArchitectureString to be of type string, got %T instead", value)
+ }
+ sv.Architecture = ptr.String(jtv)
+ }
+
case "coreDeviceThingName":
if value != nil {
jtv, ok := value.(string)
@@ -6283,6 +6301,24 @@ func awsRestjson1_deserializeDocumentCoreDevice(v **types.CoreDevice, value inte
}
}
+ case "platform":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CoreDevicePlatformString to be of type string, got %T instead", value)
+ }
+ sv.Platform = ptr.String(jtv)
+ }
+
+ case "runtime":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CoreDeviceRuntimeString to be of type string, got %T instead", value)
+ }
+ sv.Runtime = ptr.String(jtv)
+ }
+
case "status":
if value != nil {
jtv, ok := value.(string)
diff --git a/service/greengrassv2/go_module_metadata.go b/service/greengrassv2/go_module_metadata.go
index 32c9521c567..f3d4748bc0a 100644
--- a/service/greengrassv2/go_module_metadata.go
+++ b/service/greengrassv2/go_module_metadata.go
@@ -3,4 +3,4 @@
package greengrassv2
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.35.7"
+const goModuleVersion = "1.36.0"
diff --git a/service/greengrassv2/serializers.go b/service/greengrassv2/serializers.go
index b72c1aab523..dd94864cf7b 100644
--- a/service/greengrassv2/serializers.go
+++ b/service/greengrassv2/serializers.go
@@ -1660,6 +1660,10 @@ func awsRestjson1_serializeOpHttpBindingsListCoreDevicesInput(v *ListCoreDevices
encoder.SetQuery("nextToken").String(*v.NextToken)
}
+ if v.Runtime != nil {
+ encoder.SetQuery("runtime").String(*v.Runtime)
+ }
+
if len(v.Status) > 0 {
encoder.SetQuery("status").String(string(v.Status))
}
diff --git a/service/greengrassv2/types/types.go b/service/greengrassv2/types/types.go
index be15abdaab2..838946ff18f 100644
--- a/service/greengrassv2/types/types.go
+++ b/service/greengrassv2/types/types.go
@@ -348,6 +348,9 @@ type ConnectivityInfo struct {
// runs the IoT Greengrass Core software.
type CoreDevice struct {
+ // The computer architecture of the core device.
+ Architecture *string
+
// The name of the core device. This is also the name of the IoT thing.
CoreDeviceThingName *string
@@ -355,6 +358,16 @@ type CoreDevice struct {
// format.
LastStatusUpdateTimestamp *time.Time
+ // The operating system platform that the core device runs.
+ Platform *string
+
+ // The runtime for the core device. The runtime can be:
+ //
+ // - aws_nucleus_classic
+ //
+ // - aws_nucleus_lite
+ Runtime *string
+
// The status of the core device. Core devices can have the following statuses:
//
// - HEALTHY – The IoT Greengrass Core software and all components run on the
diff --git a/service/guardduty/CHANGELOG.md b/service/guardduty/CHANGELOG.md
index ef96b63f603..2161d8e62df 100644
--- a/service/guardduty/CHANGELOG.md
+++ b/service/guardduty/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.52.1 (2024-12-12)
+
+* **Documentation**: Improved descriptions for certain APIs.
+
# v1.52.0 (2024-12-02)
* **Feature**: Add new Multi Domain Correlation findings.
diff --git a/service/guardduty/api_op_CreateFilter.go b/service/guardduty/api_op_CreateFilter.go
index 1faf41d9e7f..75ecaa09e37 100644
--- a/service/guardduty/api_op_CreateFilter.go
+++ b/service/guardduty/api_op_CreateFilter.go
@@ -63,9 +63,11 @@ type CreateFilterInput struct {
//
// - Medium: ["4", "5", "6"]
//
- // - High: ["7", "8", "9"]
+ // - High: ["7", "8"]
//
- // For more information, see [Severity levels for GuardDuty findings].
+ // - Critical: ["9", "10"]
+ //
+ // For more information, see [Findings severity levels] in the Amazon GuardDuty User Guide.

//
// - type
//
@@ -256,8 +258,8 @@ type CreateFilterInput struct {
//
// - resource.lambdaDetails.tags.value
//
+ // [Findings severity levels]: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings-severity.html
// [FindingCriteria]: https://docs.aws.amazon.com/guardduty/latest/APIReference/API_FindingCriteria.html
- // [Severity levels for GuardDuty findings]: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html#guardduty_findings-severity
//
// This member is required.
FindingCriteria *types.FindingCriteria
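The severity buckets in the doc change above shifted: High is now `["7", "8"]` and a new Critical level covers `["9", "10"]`. A minimal sketch encoding the updated mapping (the Low range `1..3` is assumed by analogy with the Medium bucket shown above; the function is illustrative, not part of the SDK):

```go
package main

import "fmt"

// severityLevel maps a GuardDuty finding severity value to its level per the
// updated buckets: Medium ["4","5","6"], High ["7","8"], Critical ["9","10"].
// The Low range (1-3) is assumed, not stated in this diff.
func severityLevel(v int) string {
	switch {
	case v >= 1 && v <= 3:
		return "Low"
	case v >= 4 && v <= 6:
		return "Medium"
	case v >= 7 && v <= 8:
		return "High"
	case v >= 9 && v <= 10:
		return "Critical"
	default:
		return "Unknown"
	}
}

func main() {
	fmt.Println(severityLevel(8)) // High
	fmt.Println(severityLevel(9)) // Critical
}
```

Callers that previously filtered on High with `["7", "8", "9"]` should note that a value of 9 now falls under Critical.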
diff --git a/service/guardduty/api_op_DescribeMalwareScans.go b/service/guardduty/api_op_DescribeMalwareScans.go
index 644c095a1b1..a391c28a39f 100644
--- a/service/guardduty/api_op_DescribeMalwareScans.go
+++ b/service/guardduty/api_op_DescribeMalwareScans.go
@@ -71,7 +71,8 @@ type DescribeMalwareScansInput struct {
type DescribeMalwareScansOutput struct {
- // Contains information about malware scans.
+ // Contains information about malware scans associated with GuardDuty Malware
+ // Protection for EC2.
//
// This member is required.
Scans []types.Scan
diff --git a/service/guardduty/api_op_UpdateOrganizationConfiguration.go b/service/guardduty/api_op_UpdateOrganizationConfiguration.go
index f20f645aaa1..62c56d12e15 100644
--- a/service/guardduty/api_op_UpdateOrganizationConfiguration.go
+++ b/service/guardduty/api_op_UpdateOrganizationConfiguration.go
@@ -53,8 +53,10 @@ type UpdateOrganizationConfigurationInput struct {
// This member is required.
DetectorId *string
- // Represents whether or not to automatically enable member accounts in the
- // organization.
+ // Represents whether to automatically enable member accounts in the organization.
+ // This applies only to new member accounts, not to existing ones. When a new
+ // account joins the organization, the chosen features will be enabled for it by
+ // default.
//
// Even though this is still supported, we recommend using
// AutoEnableOrganizationMembers to achieve the similar results. You must provide a
diff --git a/service/guardduty/go_module_metadata.go b/service/guardduty/go_module_metadata.go
index ec4a6228a65..366b47e13b6 100644
--- a/service/guardduty/go_module_metadata.go
+++ b/service/guardduty/go_module_metadata.go
@@ -3,4 +3,4 @@
package guardduty
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.52.0"
+const goModuleVersion = "1.52.1"
diff --git a/service/guardduty/types/types.go b/service/guardduty/types/types.go
index bd4aa5c7609..c57a801ef47 100644
--- a/service/guardduty/types/types.go
+++ b/service/guardduty/types/types.go
@@ -2371,6 +2371,9 @@ type Organization struct {
// A list of additional configurations which will be configured for the
// organization.
+//
+// Additional configuration applies only to the GuardDuty Runtime Monitoring
+// protection plan.
type OrganizationAdditionalConfiguration struct {
// The status of the additional configuration that will be configured for the
@@ -2394,7 +2397,8 @@ type OrganizationAdditionalConfiguration struct {
AutoEnable OrgFeatureStatus
// The name of the additional configuration that will be configured for the
- // organization.
+ // organization. These values apply only to the Runtime Monitoring protection
+ // plan.
Name OrgFeatureAdditionalConfiguration
noSmithyDocumentSerde
@@ -2425,7 +2429,8 @@ type OrganizationAdditionalConfigurationResult struct {
AutoEnable OrgFeatureStatus
// The name of the additional configuration that is configured for the member
- // accounts within the organization.
+ // accounts within the organization. These values apply only to the Runtime
+ // Monitoring protection plan.
Name OrgFeatureAdditionalConfiguration
noSmithyDocumentSerde
@@ -3454,7 +3459,8 @@ type S3ObjectDetail struct {
noSmithyDocumentSerde
}
-// Contains information about a malware scan.
+// Contains information about malware scans associated with GuardDuty Malware
+// Protection for EC2.
type Scan struct {
// The ID for the account that belongs to the scan.
@@ -3473,7 +3479,7 @@ type Scan struct {
// List of volumes that were attached to the original instance to be scanned.
AttachedVolumes []VolumeDetail
- // The unique ID of the detector that the request is associated with.
+ // The unique ID of the detector that is associated with the request.
//
// To find the detectorId in the current Region, see the Settings page in the
// GuardDuty console, or run the [ListDetectors]API.
diff --git a/service/internal/integrationtest/go.mod b/service/internal/integrationtest/go.mod
index 6204b0415c3..84955e71a4b 100644
--- a/service/internal/integrationtest/go.mod
+++ b/service/internal/integrationtest/go.mod
@@ -9,29 +9,29 @@ require (
github.com/aws/aws-sdk-go-v2/service/applicationautoscaling v1.34.2
github.com/aws/aws-sdk-go-v2/service/applicationdiscoveryservice v1.29.1
github.com/aws/aws-sdk-go-v2/service/appstream v1.41.7
- github.com/aws/aws-sdk-go-v2/service/athena v1.49.0
- github.com/aws/aws-sdk-go-v2/service/batch v1.48.2
+ github.com/aws/aws-sdk-go-v2/service/athena v1.49.1
+ github.com/aws/aws-sdk-go-v2/service/batch v1.49.0
github.com/aws/aws-sdk-go-v2/service/cloudformation v1.56.1
- github.com/aws/aws-sdk-go-v2/service/cloudfront v1.43.1
- github.com/aws/aws-sdk-go-v2/service/cloudhsmv2 v1.27.8
+ github.com/aws/aws-sdk-go-v2/service/cloudfront v1.44.0
+ github.com/aws/aws-sdk-go-v2/service/cloudhsmv2 v1.28.0
github.com/aws/aws-sdk-go-v2/service/cloudtrail v1.46.3
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.43.3
- github.com/aws/aws-sdk-go-v2/service/codebuild v1.49.2
+ github.com/aws/aws-sdk-go-v2/service/codebuild v1.49.3
github.com/aws/aws-sdk-go-v2/service/codecommit v1.27.7
github.com/aws/aws-sdk-go-v2/service/codedeploy v1.29.7
- github.com/aws/aws-sdk-go-v2/service/codepipeline v1.37.1
+ github.com/aws/aws-sdk-go-v2/service/codepipeline v1.38.0
github.com/aws/aws-sdk-go-v2/service/cognitoidentityprovider v1.48.1
github.com/aws/aws-sdk-go-v2/service/configservice v1.51.1
github.com/aws/aws-sdk-go-v2/service/costandusagereportservice v1.28.7
- github.com/aws/aws-sdk-go-v2/service/databasemigrationservice v1.44.5
+ github.com/aws/aws-sdk-go-v2/service/databasemigrationservice v1.45.0
github.com/aws/aws-sdk-go-v2/service/devicefarm v1.28.7
github.com/aws/aws-sdk-go-v2/service/directconnect v1.30.1
github.com/aws/aws-sdk-go-v2/service/directoryservice v1.30.8
github.com/aws/aws-sdk-go-v2/service/docdb v1.39.6
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.38.0
- github.com/aws/aws-sdk-go-v2/service/ec2 v1.196.0
+ github.com/aws/aws-sdk-go-v2/service/ec2 v1.198.0
github.com/aws/aws-sdk-go-v2/service/ecr v1.36.7
- github.com/aws/aws-sdk-go-v2/service/ecs v1.52.2
+ github.com/aws/aws-sdk-go-v2/service/ecs v1.53.0
github.com/aws/aws-sdk-go-v2/service/efs v1.34.1
github.com/aws/aws-sdk-go-v2/service/elasticache v1.44.1
github.com/aws/aws-sdk-go-v2/service/elasticbeanstalk v1.28.7
@@ -43,11 +43,11 @@ require (
github.com/aws/aws-sdk-go-v2/service/firehose v1.35.2
github.com/aws/aws-sdk-go-v2/service/gamelift v1.37.2
github.com/aws/aws-sdk-go-v2/service/glacier v1.26.7
- github.com/aws/aws-sdk-go-v2/service/glue v1.103.0
+ github.com/aws/aws-sdk-go-v2/service/glue v1.104.0
github.com/aws/aws-sdk-go-v2/service/health v1.29.1
github.com/aws/aws-sdk-go-v2/service/iam v1.38.2
github.com/aws/aws-sdk-go-v2/service/inspector v1.25.7
- github.com/aws/aws-sdk-go-v2/service/iot v1.61.1
+ github.com/aws/aws-sdk-go-v2/service/iot v1.62.0
github.com/aws/aws-sdk-go-v2/service/kinesis v1.32.7
github.com/aws/aws-sdk-go-v2/service/kms v1.37.7
github.com/aws/aws-sdk-go-v2/service/lambda v1.69.1
@@ -56,11 +56,11 @@ require (
github.com/aws/aws-sdk-go-v2/service/neptune v1.35.6
github.com/aws/aws-sdk-go-v2/service/pinpointemail v1.23.7
github.com/aws/aws-sdk-go-v2/service/polly v1.45.8
- github.com/aws/aws-sdk-go-v2/service/rds v1.92.0
+ github.com/aws/aws-sdk-go-v2/service/rds v1.93.0
github.com/aws/aws-sdk-go-v2/service/redshift v1.53.0
github.com/aws/aws-sdk-go-v2/service/rekognition v1.45.8
github.com/aws/aws-sdk-go-v2/service/route53 v1.46.3
- github.com/aws/aws-sdk-go-v2/service/route53domains v1.27.7
+ github.com/aws/aws-sdk-go-v2/service/route53domains v1.28.0
github.com/aws/aws-sdk-go-v2/service/route53resolver v1.34.2
github.com/aws/aws-sdk-go-v2/service/s3 v1.71.0
github.com/aws/aws-sdk-go-v2/service/s3control v1.52.0
diff --git a/service/internetmonitor/CHANGELOG.md b/service/internetmonitor/CHANGELOG.md
index 4995b396dd1..a2633ff07b7 100644
--- a/service/internetmonitor/CHANGELOG.md
+++ b/service/internetmonitor/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.20.3 (2024-12-17)
+
+* No change notes available for this release.
+
# v1.20.2 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/internetmonitor/go_module_metadata.go b/service/internetmonitor/go_module_metadata.go
index 0fdb60bd5b5..3777f5fd81a 100644
--- a/service/internetmonitor/go_module_metadata.go
+++ b/service/internetmonitor/go_module_metadata.go
@@ -3,4 +3,4 @@
package internetmonitor
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.20.2"
+const goModuleVersion = "1.20.3"
diff --git a/service/internetmonitor/internal/endpoints/endpoints.go b/service/internetmonitor/internal/endpoints/endpoints.go
index 35afd447390..d9731dae51e 100644
--- a/service/internetmonitor/internal/endpoints/endpoints.go
+++ b/service/internetmonitor/internal/endpoints/endpoints.go
@@ -144,56 +144,122 @@ var defaultPartitions = endpoints.Partitions{
}: endpoints.Endpoint{
Hostname: "internetmonitor.af-south-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "af-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.af-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-east-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-east-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-northeast-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-northeast-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-2",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-northeast-2.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-northeast-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-3",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-northeast-3.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-northeast-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-south-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-south-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-south-2",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-south-2.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-south-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-south-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-southeast-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-southeast-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-2",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-southeast-2.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-southeast-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-3",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-southeast-3.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-southeast-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-4",
}: endpoints.Endpoint{
Hostname: "internetmonitor.ap-southeast-4.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-4",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ap-southeast-4.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-5",
}: endpoints.Endpoint{
@@ -210,6 +276,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "internetmonitor-fips.ca-central-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "ca-central-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor-fips.ca-central-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "ca-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.ca-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ca-west-1",
}: endpoints.Endpoint{
@@ -220,41 +298,89 @@ var defaultPartitions = endpoints.Partitions{
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-central-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-central-2",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-central-2.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-central-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-central-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-north-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-north-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-north-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-north-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-south-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-south-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-south-2",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-south-2.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-south-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-south-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-west-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-2",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-west-2.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-west-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-west-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-3",
}: endpoints.Endpoint{
Hostname: "internetmonitor.eu-west-3.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "eu-west-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.eu-west-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "il-central-1",
}: endpoints.Endpoint{
@@ -265,16 +391,34 @@ var defaultPartitions = endpoints.Partitions{
}: endpoints.Endpoint{
Hostname: "internetmonitor.me-central-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "me-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.me-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "me-south-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.me-south-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "me-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.me-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "sa-east-1",
}: endpoints.Endpoint{
Hostname: "internetmonitor.sa-east-1.api.aws",
},
+ endpoints.EndpointKey{
+ Region: "sa-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.sa-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-east-1",
}: endpoints.Endpoint{
@@ -286,6 +430,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "internetmonitor-fips.us-east-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-east-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor-fips.us-east-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.us-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-east-2",
}: endpoints.Endpoint{
@@ -297,6 +453,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "internetmonitor-fips.us-east-2.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-east-2",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor-fips.us-east-2.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-east-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.us-east-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-west-1",
}: endpoints.Endpoint{
@@ -308,6 +476,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "internetmonitor-fips.us-west-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-west-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor-fips.us-west-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.us-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-west-2",
}: endpoints.Endpoint{
@@ -319,6 +499,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "internetmonitor-fips.us-west-2.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-west-2",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor-fips.us-west-2.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-west-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "internetmonitor.us-west-2.api.aws",
+ },
},
},
{
diff --git a/service/iot/CHANGELOG.md b/service/iot/CHANGELOG.md
index 85dc4eb3780..c14bd3b2183 100644
--- a/service/iot/CHANGELOG.md
+++ b/service/iot/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.62.0 (2024-12-18)
+
+* **Feature**: Release the connectivity status query API, a dedicated high-throughput (TPS) API for querying a specific device's most recent connectivity state and metadata.
+
# v1.61.1 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/iot/api_op_CreateCommand.go b/service/iot/api_op_CreateCommand.go
index f04616e1bfd..9b9892a2640 100644
--- a/service/iot/api_op_CreateCommand.go
+++ b/service/iot/api_op_CreateCommand.go
@@ -62,7 +62,10 @@ type CreateCommandInput struct {
// specify the payload content type.
Payload *types.CommandPayload
- // The IAM role that allows access to create the command.
+ // The IAM role that you must provide when using the AWS-IoT-FleetWise namespace.
+ // The role grants IoT Device Management the permission to access IoT FleetWise
+ // resources for generating the payload for the command. This field is not required
+ // when you use the AWS-IoT namespace.
RoleArn *string
// Name-value pairs that are used as metadata to manage a command.
diff --git a/service/iot/api_op_GetCommand.go b/service/iot/api_op_GetCommand.go
index a035fb5b47a..33f5e788be6 100644
--- a/service/iot/api_op_GetCommand.go
+++ b/service/iot/api_op_GetCommand.go
@@ -74,7 +74,8 @@ type GetCommandOutput struct {
// Indicates whether the command is being deleted.
PendingDeletion *bool
- // The IAM role that allows access to retrieve information about the command.
+ // The IAM role that you provided when creating the command with AWS-IoT-FleetWise
+ // as the namespace.
RoleArn *string
// Metadata pertaining to the operation's result.
diff --git a/service/iot/api_op_GetCommandExecution.go b/service/iot/api_op_GetCommandExecution.go
index ac798f7fb3e..393c3072a54 100644
--- a/service/iot/api_op_GetCommandExecution.go
+++ b/service/iot/api_op_GetCommandExecution.go
@@ -104,7 +104,8 @@ type GetCommandExecutionOutput struct {
// is being performed.
TargetArn *string
- // The time to live (TTL) parameter for the GetCommandExecution API.
+ // The time to live (TTL) parameter that indicates the duration for which
+ // executions will be retained in your account. The default value is six months.
TimeToLive *time.Time
// Metadata pertaining to the operation's result.
diff --git a/service/iot/api_op_GetThingConnectivityData.go b/service/iot/api_op_GetThingConnectivityData.go
new file mode 100644
index 00000000000..b92fe445293
--- /dev/null
+++ b/service/iot/api_op_GetThingConnectivityData.go
@@ -0,0 +1,167 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package iot
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/iot/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+ "time"
+)
+
+// Retrieves the live connectivity status per device.
+func (c *Client) GetThingConnectivityData(ctx context.Context, params *GetThingConnectivityDataInput, optFns ...func(*Options)) (*GetThingConnectivityDataOutput, error) {
+ if params == nil {
+ params = &GetThingConnectivityDataInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetThingConnectivityData", params, optFns, c.addOperationGetThingConnectivityDataMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetThingConnectivityDataOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetThingConnectivityDataInput struct {
+
+ // The name of your IoT thing.
+ //
+ // This member is required.
+ ThingName *string
+
+ noSmithyDocumentSerde
+}
+
+type GetThingConnectivityDataOutput struct {
+
+ // A Boolean that indicates the connectivity status.
+ Connected *bool
+
+ // The reason why the client is disconnecting.
+ DisconnectReason types.DisconnectReasonValue
+
+ // The name of your IoT thing.
+ ThingName *string
+
+ // The timestamp of when the event occurred.
+ Timestamp *time.Time
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetThingConnectivityDataMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsRestjson1_serializeOpGetThingConnectivityData{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetThingConnectivityData{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetThingConnectivityData"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetThingConnectivityDataValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetThingConnectivityData(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetThingConnectivityData(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetThingConnectivityData",
+ }
+}
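Calling the new operation follows the usual SDK v2 client pattern. A minimal sketch, assuming AWS credentials and a region are configured in the environment and that a registered thing named `my-thing` exists (the name is hypothetical):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/iot"
)

func main() {
	// Load credentials and region from the default chain (env, shared config, ...).
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := iot.NewFromConfig(cfg)

	thingName := "my-thing" // hypothetical thing name
	out, err := client.GetThingConnectivityData(context.TODO(), &iot.GetThingConnectivityDataInput{
		ThingName: &thingName,
	})
	if err != nil {
		log.Fatal(err)
	}
	if out.Connected != nil && *out.Connected {
		fmt.Printf("%s is connected as of %v\n", thingName, out.Timestamp)
	} else {
		fmt.Printf("%s is disconnected: %s\n", thingName, out.DisconnectReason)
	}
}
```

The input and output fields used here match the `GetThingConnectivityDataInput`/`Output` structs added in this file.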
diff --git a/service/iot/api_op_ListCommandExecutions.go b/service/iot/api_op_ListCommandExecutions.go
index a7a45dbd9b2..cf64f91f461 100644
--- a/service/iot/api_op_ListCommandExecutions.go
+++ b/service/iot/api_op_ListCommandExecutions.go
@@ -13,10 +13,18 @@ import (
// List all command executions.
//
-// You must provide only the startedTimeFilter or the completedTimeFilter
-// information. If you provide both time filters, the API will generate an error.
-// You can use this information to find command executions that started within a
-// specific timeframe.
+// - You must provide only the startedTimeFilter or the completedTimeFilter
+// information. If you provide both time filters, the API will generate an error.
+// You can use this information to retrieve a list of command executions within a
+// specific timeframe.
+//
+// - You must provide only the commandArn or the thingArn information depending
+// on whether you want to list executions for a specific command or an IoT thing.
+// If you provide both fields, the API will generate an error.
+//
+// For more information about considerations for using this API, see [List command executions in your account (CLI)].
+//
+// [List command executions in your account (CLI)]: https://docs.aws.amazon.com/iot/latest/developerguide/iot-remote-command-execution-start-monitor.html#iot-remote-command-execution-list-cli
func (c *Client) ListCommandExecutions(ctx context.Context, params *ListCommandExecutionsInput, optFns ...func(*Options)) (*ListCommandExecutionsOutput, error) {
if params == nil {
params = &ListCommandExecutionsInput{}
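The two mutual-exclusion rules documented above can be checked client-side before calling the API. A minimal sketch (the helper is illustrative, not part of the SDK; it takes booleans for whether each field is set):

```go
package main

import (
	"errors"
	"fmt"
)

// checkListCommandExecutionsFilters mirrors the documented constraints:
// at most one of the two time filters, and at most one of commandArn or
// thingArn, may be set on a ListCommandExecutions request.
func checkListCommandExecutionsFilters(hasStarted, hasCompleted, hasCommandArn, hasThingArn bool) error {
	if hasStarted && hasCompleted {
		return errors.New("provide only startedTimeFilter or completedTimeFilter, not both")
	}
	if hasCommandArn && hasThingArn {
		return errors.New("provide only commandArn or thingArn, not both")
	}
	return nil
}

func main() {
	// One time filter plus one target: valid.
	fmt.Println(checkListCommandExecutionsFilters(true, false, true, false))
	// Both time filters: the API would reject this.
	fmt.Println(checkListCommandExecutionsFilters(true, true, false, false))
}
```

Failing fast locally gives a clearer error than the service-side validation failure.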
diff --git a/service/iot/deserializers.go b/service/iot/deserializers.go
index 208f0ca3f6d..56938f0c524 100644
--- a/service/iot/deserializers.go
+++ b/service/iot/deserializers.go
@@ -23133,6 +23133,214 @@ func awsRestjson1_deserializeOpDocumentGetStatisticsOutput(v **GetStatisticsOutp
return nil
}
+type awsRestjson1_deserializeOpGetThingConnectivityData struct {
+}
+
+func (*awsRestjson1_deserializeOpGetThingConnectivityData) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetThingConnectivityData) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsRestjson1_deserializeOpErrorGetThingConnectivityData(response, &metadata)
+ }
+ output := &GetThingConnectivityDataOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsRestjson1_deserializeOpDocumentGetThingConnectivityDataOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ return out, metadata, &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ }
+
+ span.End()
+ return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetThingConnectivityData(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+ if len(headerCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(headerCode)
+ }
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ jsonCode, message, err := restjson.GetErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if len(headerCode) == 0 && len(jsonCode) != 0 {
+ errorCode = restjson.SanitizeErrorCode(jsonCode)
+ }
+ if len(message) != 0 {
+ errorMessage = message
+ }
+
+ switch {
+ case strings.EqualFold("IndexNotReadyException", errorCode):
+ return awsRestjson1_deserializeErrorIndexNotReadyException(response, errorBody)
+
+ case strings.EqualFold("InternalFailureException", errorCode):
+ return awsRestjson1_deserializeErrorInternalFailureException(response, errorBody)
+
+ case strings.EqualFold("InvalidRequestException", errorCode):
+ return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+ case strings.EqualFold("ResourceNotFoundException", errorCode):
+ return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+ case strings.EqualFold("ServiceUnavailableException", errorCode):
+ return awsRestjson1_deserializeErrorServiceUnavailableException(response, errorBody)
+
+ case strings.EqualFold("ThrottlingException", errorCode):
+ return awsRestjson1_deserializeErrorThrottlingException(response, errorBody)
+
+ case strings.EqualFold("UnauthorizedException", errorCode):
+ return awsRestjson1_deserializeErrorUnauthorizedException(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+func awsRestjson1_deserializeOpDocumentGetThingConnectivityDataOutput(v **GetThingConnectivityDataOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetThingConnectivityDataOutput
+ if *v == nil {
+ sv = &GetThingConnectivityDataOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "connected":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected Boolean to be of type *bool, got %T instead", value)
+ }
+ sv.Connected = ptr.Bool(jtv)
+ }
+
+ case "disconnectReason":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected DisconnectReasonValue to be of type string, got %T instead", value)
+ }
+ sv.DisconnectReason = types.DisconnectReasonValue(jtv)
+ }
+
+ case "thingName":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ConnectivityApiThingName to be of type string, got %T instead", value)
+ }
+ sv.ThingName = ptr.String(jtv)
+ }
+
+ case "timestamp":
+ if value != nil {
+ switch jtv := value.(type) {
+ case json.Number:
+ f64, err := jtv.Float64()
+ if err != nil {
+ return err
+ }
+ sv.Timestamp = ptr.Time(smithytime.ParseEpochSeconds(f64))
+
+ default:
+ return fmt.Errorf("expected Timestamp to be a JSON Number, got %T instead", value)
+
+ }
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
type awsRestjson1_deserializeOpGetTopicRule struct {
}
diff --git a/service/iot/generated.json b/service/iot/generated.json
index b6427d91c0f..18bda9868d7 100644
--- a/service/iot/generated.json
+++ b/service/iot/generated.json
@@ -153,6 +153,7 @@
"api_op_GetPolicyVersion.go",
"api_op_GetRegistrationCode.go",
"api_op_GetStatistics.go",
+ "api_op_GetThingConnectivityData.go",
"api_op_GetTopicRule.go",
"api_op_GetTopicRuleDestination.go",
"api_op_GetV2LoggingOptions.go",
diff --git a/service/iot/go_module_metadata.go b/service/iot/go_module_metadata.go
index 83d5a1f6b3b..683c8d1afa7 100644
--- a/service/iot/go_module_metadata.go
+++ b/service/iot/go_module_metadata.go
@@ -3,4 +3,4 @@
package iot
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.61.1"
+const goModuleVersion = "1.62.0"
diff --git a/service/iot/serializers.go b/service/iot/serializers.go
index 56e1a365473..b5a52d443b6 100644
--- a/service/iot/serializers.go
+++ b/service/iot/serializers.go
@@ -12276,6 +12276,77 @@ func awsRestjson1_serializeOpDocumentGetStatisticsInput(v *GetStatisticsInput, v
return nil
}
+type awsRestjson1_serializeOpGetThingConnectivityData struct {
+}
+
+func (*awsRestjson1_serializeOpGetThingConnectivityData) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetThingConnectivityData) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetThingConnectivityDataInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ opPath, opQuery := httpbinding.SplitURI("/things/{thingName}/connectivity-data")
+ request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+ request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+ request.Method = "POST"
+ var restEncoder *httpbinding.Encoder
+ if request.URL.RawPath == "" {
+ restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ } else {
+ request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+ restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+ }
+
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if err := awsRestjson1_serializeOpHttpBindingsGetThingConnectivityDataInput(input, restEncoder); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetThingConnectivityDataInput(v *GetThingConnectivityDataInput, encoder *httpbinding.Encoder) error {
+ if v == nil {
+ return fmt.Errorf("unsupported serialization of nil %T", v)
+ }
+
+ if v.ThingName == nil || len(*v.ThingName) == 0 {
+ return &smithy.SerializationError{Err: fmt.Errorf("input member thingName must not be empty")}
+ }
+ if v.ThingName != nil {
+ if err := encoder.SetURI("thingName").String(*v.ThingName); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
type awsRestjson1_serializeOpGetTopicRule struct {
}
diff --git a/service/iot/snapshot/api_op_GetThingConnectivityData.go.snap b/service/iot/snapshot/api_op_GetThingConnectivityData.go.snap
new file mode 100644
index 00000000000..6456540b229
--- /dev/null
+++ b/service/iot/snapshot/api_op_GetThingConnectivityData.go.snap
@@ -0,0 +1,41 @@
+GetThingConnectivityData
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/iot/snapshot_test.go b/service/iot/snapshot_test.go
index a58d274958a..866a8999d33 100644
--- a/service/iot/snapshot_test.go
+++ b/service/iot/snapshot_test.go
@@ -1802,6 +1802,18 @@ func TestCheckSnapshot_GetStatistics(t *testing.T) {
}
}
+func TestCheckSnapshot_GetThingConnectivityData(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetThingConnectivityData(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetThingConnectivityData")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_GetTopicRule(t *testing.T) {
svc := New(Options{})
_, err := svc.GetTopicRule(context.Background(), nil, func(o *Options) {
@@ -5029,6 +5041,18 @@ func TestUpdateSnapshot_GetStatistics(t *testing.T) {
}
}
+func TestUpdateSnapshot_GetThingConnectivityData(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetThingConnectivityData(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetThingConnectivityData")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_GetTopicRule(t *testing.T) {
svc := New(Options{})
_, err := svc.GetTopicRule(context.Background(), nil, func(o *Options) {
diff --git a/service/iot/types/enums.go b/service/iot/types/enums.go
index de96224adb7..9ae0f3db2f0 100644
--- a/service/iot/types/enums.go
+++ b/service/iot/types/enums.go
@@ -840,6 +840,49 @@ func (DimensionValueOperator) Values() []DimensionValueOperator {
}
}
+type DisconnectReasonValue string
+
+// Enum values for DisconnectReasonValue
+const (
+ DisconnectReasonValueAuthError DisconnectReasonValue = "AUTH_ERROR"
+ DisconnectReasonValueClientInitiatedDisconnect DisconnectReasonValue = "CLIENT_INITIATED_DISCONNECT"
+ DisconnectReasonValueClientError DisconnectReasonValue = "CLIENT_ERROR"
+ DisconnectReasonValueConnectionLost DisconnectReasonValue = "CONNECTION_LOST"
+ DisconnectReasonValueDuplicateClientid DisconnectReasonValue = "DUPLICATE_CLIENTID"
+ DisconnectReasonValueForbiddenAccess DisconnectReasonValue = "FORBIDDEN_ACCESS"
+ DisconnectReasonValueMqttKeepAliveTimeout DisconnectReasonValue = "MQTT_KEEP_ALIVE_TIMEOUT"
+ DisconnectReasonValueServerError DisconnectReasonValue = "SERVER_ERROR"
+ DisconnectReasonValueServerInitiatedDisconnect DisconnectReasonValue = "SERVER_INITIATED_DISCONNECT"
+ DisconnectReasonValueThrottled DisconnectReasonValue = "THROTTLED"
+ DisconnectReasonValueWebsocketTtlExpiration DisconnectReasonValue = "WEBSOCKET_TTL_EXPIRATION"
+ DisconnectReasonValueCustomauthTtlExpiration DisconnectReasonValue = "CUSTOMAUTH_TTL_EXPIRATION"
+ DisconnectReasonValueUnknown DisconnectReasonValue = "UNKNOWN"
+ DisconnectReasonValueNone DisconnectReasonValue = "NONE"
+)
+
+// Values returns all known values for DisconnectReasonValue. Note that this can
+// be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (DisconnectReasonValue) Values() []DisconnectReasonValue {
+ return []DisconnectReasonValue{
+ "AUTH_ERROR",
+ "CLIENT_INITIATED_DISCONNECT",
+ "CLIENT_ERROR",
+ "CONNECTION_LOST",
+ "DUPLICATE_CLIENTID",
+ "FORBIDDEN_ACCESS",
+ "MQTT_KEEP_ALIVE_TIMEOUT",
+ "SERVER_ERROR",
+ "SERVER_INITIATED_DISCONNECT",
+ "THROTTLED",
+ "WEBSOCKET_TTL_EXPIRATION",
+ "CUSTOMAUTH_TTL_EXPIRATION",
+ "UNKNOWN",
+ "NONE",
+ }
+}
+
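`DisconnectReasonValue` above follows the SDK's open string-enum pattern: the type accepts any string so newer server-side values don't break older clients, while `Values()` enumerates only the values known at code-generation time. A minimal sketch of the pattern with a hypothetical `known` helper (not part of the SDK):

```go
package main

import "fmt"

// DisconnectReason mimics the generated open-enum shape: a string-backed type
// plus a Values() method listing the values known when the code was generated.
type DisconnectReason string

const (
	DisconnectReasonAuthError      DisconnectReason = "AUTH_ERROR"
	DisconnectReasonConnectionLost DisconnectReason = "CONNECTION_LOST"
	DisconnectReasonNone           DisconnectReason = "NONE"
)

func (DisconnectReason) Values() []DisconnectReason {
	return []DisconnectReason{"AUTH_ERROR", "CONNECTION_LOST", "NONE"}
}

// known reports whether r is one of the generated values. Unknown strings are
// still valid DisconnectReason values, which is the point of the open design.
func known(r DisconnectReason) bool {
	for _, v := range (DisconnectReason("")).Values() {
		if v == r {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(known("CONNECTION_LOST"), known("SOME_FUTURE_VALUE"))
}
```

Callers switching on such an enum should always handle a default case, since the service may return values the client was generated before.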
type DomainConfigurationStatus string
// Enum values for DomainConfigurationStatus
diff --git a/service/iot/validators.go b/service/iot/validators.go
index 84c2d361735..e84b2793c9e 100644
--- a/service/iot/validators.go
+++ b/service/iot/validators.go
@@ -2590,6 +2590,26 @@ func (m *validateOpGetStatistics) HandleInitialize(ctx context.Context, in middl
return next.HandleInitialize(ctx, in)
}
+type validateOpGetThingConnectivityData struct {
+}
+
+func (*validateOpGetThingConnectivityData) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetThingConnectivityData) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetThingConnectivityDataInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetThingConnectivityDataInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpGetTopicRuleDestination struct {
}
@@ -4706,6 +4726,10 @@ func addOpGetStatisticsValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetStatistics{}, middleware.After)
}
+func addOpGetThingConnectivityDataValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetThingConnectivityData{}, middleware.After)
+}
+
func addOpGetTopicRuleDestinationValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetTopicRuleDestination{}, middleware.After)
}
@@ -9218,6 +9242,21 @@ func validateOpGetStatisticsInput(v *GetStatisticsInput) error {
}
}
+func validateOpGetThingConnectivityDataInput(v *GetThingConnectivityDataInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetThingConnectivityDataInput"}
+ if v.ThingName == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ThingName"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpGetTopicRuleDestinationInput(v *GetTopicRuleDestinationInput) error {
if v == nil {
return nil
diff --git a/service/kafka/CHANGELOG.md b/service/kafka/CHANGELOG.md
index 146aa671a35..33905af74e0 100644
--- a/service/kafka/CHANGELOG.md
+++ b/service/kafka/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.38.8 (2024-12-18)
+
+* No change notes available for this release.
+
# v1.38.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/kafka/go_module_metadata.go b/service/kafka/go_module_metadata.go
index df01fa94cec..f34f1bcbcae 100644
--- a/service/kafka/go_module_metadata.go
+++ b/service/kafka/go_module_metadata.go
@@ -3,4 +3,4 @@
package kafka
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.38.7"
+const goModuleVersion = "1.38.8"
diff --git a/service/kafka/internal/endpoints/endpoints.go b/service/kafka/internal/endpoints/endpoints.go
index 3d9ac29bbe0..f3abb9b1213 100644
--- a/service/kafka/internal/endpoints/endpoints.go
+++ b/service/kafka/internal/endpoints/endpoints.go
@@ -172,6 +172,9 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "ap-southeast-4",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-5",
+ }: endpoints.Endpoint{},
endpoints.EndpointKey{
Region: "ca-central-1",
}: endpoints.Endpoint{},
diff --git a/service/kinesis/internal/testing/eventstream_test.go b/service/kinesis/internal/testing/eventstream_test.go
index 8f6d4b01213..1ff6c73a13a 100644
--- a/service/kinesis/internal/testing/eventstream_test.go
+++ b/service/kinesis/internal/testing/eventstream_test.go
@@ -299,7 +299,7 @@ func TestSubscribeToShard_ReadException(t *testing.T) {
expectedErr,
&types.InternalFailureException{Message: aws.String("this is an exception message")},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
@@ -366,7 +366,7 @@ func TestSubscribeToShard_ReadUnmodeledException(t *testing.T) {
Message: "this is an unmodeled exception message",
},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
@@ -437,7 +437,7 @@ func TestSubscribeToShard_ReadErrorEvent(t *testing.T) {
Message: "An error message",
},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
@@ -478,7 +478,7 @@ func TestSubscribeToShard_ResponseError(t *testing.T) {
Message: "this is an exception message",
},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
diff --git a/service/lakeformation/CHANGELOG.md b/service/lakeformation/CHANGELOG.md
index 959a911609e..53b201b0bb3 100644
--- a/service/lakeformation/CHANGELOG.md
+++ b/service/lakeformation/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.39.1 (2024-12-17)
+
+* No change notes available for this release.
+
# v1.39.0 (2024-12-03.2)
 * **Feature**: This release added two new LakeFormation Permissions (CREATE_CATALOG, SUPER_USER) and added Id field for CatalogResource. It also added new condition and expression fields.
diff --git a/service/lakeformation/go_module_metadata.go b/service/lakeformation/go_module_metadata.go
index 4a81b42ff91..6d0b355a04a 100644
--- a/service/lakeformation/go_module_metadata.go
+++ b/service/lakeformation/go_module_metadata.go
@@ -3,4 +3,4 @@
package lakeformation
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.39.0"
+const goModuleVersion = "1.39.1"
diff --git a/service/lakeformation/internal/endpoints/endpoints.go b/service/lakeformation/internal/endpoints/endpoints.go
index 62a7701aae9..82d924c6813 100644
--- a/service/lakeformation/internal/endpoints/endpoints.go
+++ b/service/lakeformation/internal/endpoints/endpoints.go
@@ -142,69 +142,201 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "af-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "af-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.af-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-east-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-northeast-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-northeast-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-northeast-3",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-northeast-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-northeast-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-south-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-south-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-south-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-southeast-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-southeast-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-3",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-southeast-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-4",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-4",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-southeast-4.api.aws",
+ },
endpoints.EndpointKey{
Region: "ap-southeast-5",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-5",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ap-southeast-5.api.aws",
+ },
endpoints.EndpointKey{
Region: "ca-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ca-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ca-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "ca-west-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ca-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.ca-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-central-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-central-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-central-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-north-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-north-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-north-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-south-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-south-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-south-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-2",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-west-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-west-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "eu-west-3",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-west-3",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.eu-west-3.api.aws",
+ },
endpoints.EndpointKey{
Region: "fips-us-east-1",
}: endpoints.Endpoint{
@@ -244,15 +376,39 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "il-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "il-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.il-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "me-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "me-central-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.me-central-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "me-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "me-south-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.me-south-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "sa-east-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "sa-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.sa-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-east-1",
}: endpoints.Endpoint{},
@@ -262,6 +418,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "lakeformation-fips.us-east-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-east-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation-fips.us-east-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-east-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.us-east-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-east-2",
}: endpoints.Endpoint{},
@@ -271,6 +439,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "lakeformation-fips.us-east-2.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-east-2",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation-fips.us-east-2.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-east-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.us-east-2.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-west-1",
}: endpoints.Endpoint{},
@@ -280,6 +460,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "lakeformation-fips.us-west-1.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-west-1",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation-fips.us-west-1.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-west-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.us-west-1.api.aws",
+ },
endpoints.EndpointKey{
Region: "us-west-2",
}: endpoints.Endpoint{},
@@ -289,6 +481,18 @@ var defaultPartitions = endpoints.Partitions{
}: {
Hostname: "lakeformation-fips.us-west-2.amazonaws.com",
},
+ endpoints.EndpointKey{
+ Region: "us-west-2",
+ Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation-fips.us-west-2.api.aws",
+ },
+ endpoints.EndpointKey{
+ Region: "us-west-2",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.us-west-2.api.aws",
+ },
},
},
{
@@ -329,9 +533,21 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "cn-north-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "cn-north-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.cn-north-1.api.amazonwebservices.com.cn",
+ },
endpoints.EndpointKey{
Region: "cn-northwest-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "cn-northwest-1",
+ Variant: endpoints.DualStackVariant,
+ }: {
+ Hostname: "lakeformation.cn-northwest-1.api.amazonwebservices.com.cn",
+ },
},
},
{
diff --git a/service/m2/CHANGELOG.md b/service/m2/CHANGELOG.md
index 9093cb625ae..6c817bebaa6 100644
--- a/service/m2/CHANGELOG.md
+++ b/service/m2/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.19.0 (2024-12-17)
+
+* **Feature**: This release adds support for AWS Mainframe Modernization (M2) Service to allow specifying network type (ipv4, dual) for the environment instances. For dual network type, m2 environment applications will serve both IPv4 and IPv6 requests, whereas for ipv4 it will serve only IPv4 requests.
+
# v1.18.5 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/m2/api_op_CreateEnvironment.go b/service/m2/api_op_CreateEnvironment.go
index f5241184670..9a68fc821b5 100644
--- a/service/m2/api_op_CreateEnvironment.go
+++ b/service/m2/api_op_CreateEnvironment.go
@@ -63,6 +63,9 @@ type CreateEnvironmentInput struct {
// The identifier of a customer managed key.
KmsKeyId *string
+ // The network type required for the runtime environment.
+ NetworkType types.NetworkType
+
// Configures the maintenance window that you want for the runtime environment.
// The maintenance window must have the format ddd:hh24:mi-ddd:hh24:mi and must be
// less than 24 hours. The following two examples are valid maintenance windows:
diff --git a/service/m2/api_op_GetEnvironment.go b/service/m2/api_op_GetEnvironment.go
index 7af1ff48287..2403e392f4b 100644
--- a/service/m2/api_op_GetEnvironment.go
+++ b/service/m2/api_op_GetEnvironment.go
@@ -117,6 +117,9 @@ type GetEnvironmentOutput struct {
// environment.
LoadBalancerArn *string
+ // The network type supported by the runtime environment.
+ NetworkType types.NetworkType
+
// Indicates the pending maintenance scheduled on this environment.
PendingMaintenance *types.PendingMaintenance
diff --git a/service/m2/deserializers.go b/service/m2/deserializers.go
index 814e20bff4f..b78b5aa67ff 100644
--- a/service/m2/deserializers.go
+++ b/service/m2/deserializers.go
@@ -2946,6 +2946,15 @@ func awsRestjson1_deserializeOpDocumentGetEnvironmentOutput(v **GetEnvironmentOu
sv.Name = ptr.String(jtv)
}
+ case "networkType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected NetworkType to be of type string, got %T instead", value)
+ }
+ sv.NetworkType = types.NetworkType(jtv)
+ }
+
case "pendingMaintenance":
if err := awsRestjson1_deserializeDocumentPendingMaintenance(&sv.PendingMaintenance, value); err != nil {
return err
@@ -8102,6 +8111,15 @@ func awsRestjson1_deserializeDocumentEnvironmentSummary(v **types.EnvironmentSum
sv.Name = ptr.String(jtv)
}
+ case "networkType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected NetworkType to be of type string, got %T instead", value)
+ }
+ sv.NetworkType = types.NetworkType(jtv)
+ }
+
case "status":
if value != nil {
jtv, ok := value.(string)
diff --git a/service/m2/go_module_metadata.go b/service/m2/go_module_metadata.go
index d434fc250d6..ca8f6051113 100644
--- a/service/m2/go_module_metadata.go
+++ b/service/m2/go_module_metadata.go
@@ -3,4 +3,4 @@
package m2
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.18.5"
+const goModuleVersion = "1.19.0"
diff --git a/service/m2/serializers.go b/service/m2/serializers.go
index 643a807a256..3960f77515c 100644
--- a/service/m2/serializers.go
+++ b/service/m2/serializers.go
@@ -559,6 +559,11 @@ func awsRestjson1_serializeOpDocumentCreateEnvironmentInput(v *CreateEnvironment
ok.String(*v.Name)
}
+ if len(v.NetworkType) > 0 {
+ ok := object.Key("networkType")
+ ok.String(string(v.NetworkType))
+ }
+
if v.PreferredMaintenanceWindow != nil {
ok := object.Key("preferredMaintenanceWindow")
ok.String(*v.PreferredMaintenanceWindow)
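The serializer side uses two different "is it set?" guards, visible in the hunk above: string-backed enums are emitted only when `len(v.Field) > 0` (the empty string is the unset zero value), while optional scalars are pointers checked against nil. A hedged stdlib sketch of the same idea (the helper and payload shape are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type NetworkType string

// serializeCreateEnvironment mirrors the generated guards: string-typed enums
// are emitted only when non-empty, pointer fields only when non-nil.
func serializeCreateEnvironment(name *string, networkType NetworkType) ([]byte, error) {
	object := map[string]interface{}{}
	if name != nil {
		object["name"] = *name
	}
	if len(networkType) > 0 {
		object["networkType"] = string(networkType)
	}
	return json.Marshal(object)
}

func main() {
	name := "env-1"
	b, _ := serializeCreateEnvironment(&name, "ipv4")
	fmt.Println(string(b)) // {"name":"env-1","networkType":"ipv4"}

	b, _ = serializeCreateEnvironment(&name, "") // unset enum is omitted
	fmt.Println(string(b))                       // {"name":"env-1"}
}
```

The payoff of the `len(...) > 0` guard is backward compatibility: existing callers that never set `NetworkType` produce byte-identical request bodies after the field is added.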
diff --git a/service/m2/types/enums.go b/service/m2/types/enums.go
index dddd58c6b0a..b316efbcd65 100644
--- a/service/m2/types/enums.go
+++ b/service/m2/types/enums.go
@@ -228,6 +228,25 @@ func (EnvironmentLifecycle) Values() []EnvironmentLifecycle {
}
}
+type NetworkType string
+
+// Enum values for NetworkType
+const (
+ NetworkTypeIpv4 NetworkType = "ipv4"
+ NetworkTypeDual NetworkType = "dual"
+)
+
+// Values returns all known values for NetworkType. Note that this can be expanded
+// in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (NetworkType) Values() []NetworkType {
+ return []NetworkType{
+ "ipv4",
+ "dual",
+ }
+}
+
type ValidationExceptionReason string
// Enum values for ValidationExceptionReason
diff --git a/service/m2/types/types.go b/service/m2/types/types.go
index ac8dc73ad97..7c8e9277f00 100644
--- a/service/m2/types/types.go
+++ b/service/m2/types/types.go
@@ -662,6 +662,9 @@ type EnvironmentSummary struct {
// This member is required.
Status EnvironmentLifecycle
+ // The network type supported by the runtime environment.
+ NetworkType NetworkType
+
noSmithyDocumentSerde
}
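The new `NetworkType` follows the SDK's standard enum idiom: a string-backed type, typed constants, and a `Values()` method that enumerates the members known at codegen time. A self-contained sketch of why that idiom is forward-compatible (type and helper names here are illustrative, not the SDK's):

```go
package main

import "fmt"

// NetworkTypeSketch mirrors the generated enum style: a string-backed type,
// typed constants, and a Values() method listing the known members.
type NetworkTypeSketch string

const (
	NetworkTypeSketchIpv4 NetworkTypeSketch = "ipv4"
	NetworkTypeSketchDual NetworkTypeSketch = "dual"
)

func (NetworkTypeSketch) Values() []NetworkTypeSketch {
	return []NetworkTypeSketch{NetworkTypeSketchIpv4, NetworkTypeSketchDual}
}

// isKnown reports whether a value is in the client's compiled-in list.
// Because the backing type is a plain string, a value added by a newer
// service API still round-trips through the client instead of failing.
func isKnown(v NetworkTypeSketch) bool {
	for _, k := range (NetworkTypeSketch("")).Values() {
		if v == k {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnown("ipv4"), isKnown("ipv6-only")) // true false
}
```

This is why the `Values()` doc comment warns that the list "is only as up to date as the client": callers should treat unknown members as valid data, not errors.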
diff --git a/service/mediaconnect/CHANGELOG.md b/service/mediaconnect/CHANGELOG.md
index 9fd5e83a577..5f89a5c4d21 100644
--- a/service/mediaconnect/CHANGELOG.md
+++ b/service/mediaconnect/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.36.0 (2024-12-13)
+
+* **Feature**: AWS Elemental MediaConnect Gateway now supports Source Specific Multicast (SSM) for ingress bridges. This enables you to specify a source IP address in addition to a multicast IP when creating or updating an ingress bridge source.
+
# v1.35.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/mediaconnect/deserializers.go b/service/mediaconnect/deserializers.go
index d22c135acce..b2379c428bc 100644
--- a/service/mediaconnect/deserializers.go
+++ b/service/mediaconnect/deserializers.go
@@ -10698,6 +10698,11 @@ func awsRestjson1_deserializeDocumentBridgeNetworkSource(v **types.BridgeNetwork
sv.MulticastIp = ptr.String(jtv)
}
+ case "multicastSourceSettings":
+ if err := awsRestjson1_deserializeDocumentMulticastSourceSettings(&sv.MulticastSourceSettings, value); err != nil {
+ return err
+ }
+
case "name":
if value != nil {
jtv, ok := value.(string)
@@ -13110,6 +13115,46 @@ func awsRestjson1_deserializeDocumentMonitoringConfig(v **types.MonitoringConfig
return nil
}
+func awsRestjson1_deserializeDocumentMulticastSourceSettings(v **types.MulticastSourceSettings, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.MulticastSourceSettings
+ if *v == nil {
+ sv = &types.MulticastSourceSettings{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "multicastSourceIp":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected __string to be of type string, got %T instead", value)
+ }
+ sv.MulticastSourceIp = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentNotFoundException(v **types.NotFoundException, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
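The new `awsRestjson1_deserializeDocumentMulticastSourceSettings` uses the SDK's double-pointer convention: the `v **types.X` parameter lets the caller pass the address of a possibly-nil struct field, and the function allocates only when there is data to decode. A stripped-down stdlib sketch of that convention (struct and function names are illustrative):

```go
package main

import "fmt"

type MulticastSourceSettingsSketch struct {
	MulticastSourceIp *string
}

// deserialize mirrors the generated **T convention: the caller passes the
// address of a (possibly nil) pointer field, and allocation happens lazily,
// so a JSON null or absent key leaves the field nil.
func deserialize(v **MulticastSourceSettingsSketch, value interface{}) error {
	if v == nil {
		return fmt.Errorf("unexpected nil of type %T", v)
	}
	if value == nil {
		return nil // JSON null: leave the target untouched
	}
	shape, ok := value.(map[string]interface{})
	if !ok {
		return fmt.Errorf("unexpected JSON type %v", value)
	}
	sv := *v
	if sv == nil {
		sv = &MulticastSourceSettingsSketch{}
	}
	if raw, ok := shape["multicastSourceIp"].(string); ok {
		sv.MulticastSourceIp = &raw
	}
	*v = sv
	return nil
}

func main() {
	var target *MulticastSourceSettingsSketch
	err := deserialize(&target, map[string]interface{}{"multicastSourceIp": "10.0.0.1"})
	fmt.Println(err, *target.MulticastSourceIp) // <nil> 10.0.0.1
}
```

The lazy allocation means a response that omits the object keeps the parent field nil, so callers can distinguish "not present" from "present but empty".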
diff --git a/service/mediaconnect/go_module_metadata.go b/service/mediaconnect/go_module_metadata.go
index 223013f930e..201886e77d0 100644
--- a/service/mediaconnect/go_module_metadata.go
+++ b/service/mediaconnect/go_module_metadata.go
@@ -3,4 +3,4 @@
package mediaconnect
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.35.7"
+const goModuleVersion = "1.36.0"
diff --git a/service/mediaconnect/serializers.go b/service/mediaconnect/serializers.go
index f46fdbdd5c1..99784be0e9c 100644
--- a/service/mediaconnect/serializers.go
+++ b/service/mediaconnect/serializers.go
@@ -4952,6 +4952,13 @@ func awsRestjson1_serializeDocumentAddBridgeNetworkSourceRequest(v *types.AddBri
ok.String(*v.MulticastIp)
}
+ if v.MulticastSourceSettings != nil {
+ ok := object.Key("multicastSourceSettings")
+ if err := awsRestjson1_serializeDocumentMulticastSourceSettings(v.MulticastSourceSettings, ok); err != nil {
+ return err
+ }
+ }
+
if v.Name != nil {
ok := object.Key("name")
ok.String(*v.Name)
@@ -5547,6 +5554,18 @@ func awsRestjson1_serializeDocumentMonitoringConfig(v *types.MonitoringConfig, v
return nil
}
+func awsRestjson1_serializeDocumentMulticastSourceSettings(v *types.MulticastSourceSettings, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.MulticastSourceIp != nil {
+ ok := object.Key("multicastSourceIp")
+ ok.String(*v.MulticastSourceIp)
+ }
+
+ return nil
+}
+
func awsRestjson1_serializeDocumentSetGatewayBridgeSourceRequest(v *types.SetGatewayBridgeSourceRequest, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -5746,6 +5765,13 @@ func awsRestjson1_serializeDocumentUpdateBridgeNetworkSourceRequest(v *types.Upd
ok.String(*v.MulticastIp)
}
+ if v.MulticastSourceSettings != nil {
+ ok := object.Key("multicastSourceSettings")
+ if err := awsRestjson1_serializeDocumentMulticastSourceSettings(v.MulticastSourceSettings, ok); err != nil {
+ return err
+ }
+ }
+
if v.NetworkName != nil {
ok := object.Key("networkName")
ok.String(*v.NetworkName)
diff --git a/service/mediaconnect/types/types.go b/service/mediaconnect/types/types.go
index c2e91d46c68..8bc010b48b8 100644
--- a/service/mediaconnect/types/types.go
+++ b/service/mediaconnect/types/types.go
@@ -94,6 +94,9 @@ type AddBridgeNetworkSourceRequest struct {
// This member is required.
Protocol Protocol
+ // The settings related to the multicast source.
+ MulticastSourceSettings *MulticastSourceSettings
+
noSmithyDocumentSerde
}
@@ -417,6 +420,9 @@ type BridgeNetworkSource struct {
// This member is required.
Protocol Protocol
+ // The settings related to the multicast source.
+ MulticastSourceSettings *MulticastSourceSettings
+
noSmithyDocumentSerde
}
@@ -1397,6 +1403,15 @@ type MonitoringConfig struct {
noSmithyDocumentSerde
}
+// The settings related to the multicast source.
+type MulticastSourceSettings struct {
+
+ // The IP address of the source for source-specific multicast (SSM).
+ MulticastSourceIp *string
+
+ noSmithyDocumentSerde
+}
+
// A savings plan that reserves a certain amount of outbound bandwidth usage at a
// discounted rate each month over a period of time.
type Offering struct {
@@ -1977,6 +1992,9 @@ type UpdateBridgeNetworkSourceRequest struct {
// The network source multicast IP.
MulticastIp *string
+ // The settings related to the multicast source.
+ MulticastSourceSettings *MulticastSourceSettings
+
// The network source's gateway network name.
NetworkName *string
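The new `MulticastSourceSettings` member is a pointer in every request and response shape above, so "not configured" (nil) is distinguishable from "configured with zero values" and the key stays off the wire when unset. A hedged stdlib sketch of that wire behavior, using illustrative struct names and example addresses rather than the SDK's own marshaling:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type multicastSourceSettings struct {
	MulticastSourceIp *string `json:"multicastSourceIp,omitempty"`
}

type bridgeNetworkSource struct {
	MulticastIp             *string                  `json:"multicastIp,omitempty"`
	MulticastSourceSettings *multicastSourceSettings `json:"multicastSourceSettings,omitempty"`
}

// marshalSource serializes a bridge network source; a nil sourceIp models
// a plain any-source multicast (ASM) bridge source with no SSM settings.
func marshalSource(multicastIp string, sourceIp *string) string {
	src := bridgeNetworkSource{MulticastIp: &multicastIp}
	if sourceIp != nil {
		src.MulticastSourceSettings = &multicastSourceSettings{MulticastSourceIp: sourceIp}
	}
	b, _ := json.Marshal(src)
	return string(b)
}

func main() {
	srcIP := "10.1.2.3"
	// Source-specific multicast: both the group IP and the source IP go out.
	fmt.Println(marshalSource("239.0.0.1", &srcIP))
	// ASM: the nil pointer keeps the settings object off the wire entirely.
	fmt.Println(marshalSource("239.0.0.1", nil))
}
```

This matches the feature note above: SSM lets you name a source IP in addition to the multicast group IP, and omitting the settings preserves the pre-existing ASM behavior.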
diff --git a/service/medialive/CHANGELOG.md b/service/medialive/CHANGELOG.md
index 6cf02389d9d..a2de1f473fd 100644
--- a/service/medialive/CHANGELOG.md
+++ b/service/medialive/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.64.0 (2024-12-16)
+
+* **Feature**: AWS Elemental MediaLive adds three new features: MediaPackage v2 endpoint support for live stream delivery, KLV metadata passthrough in CMAF Ingest output groups, and Metadata Name Modifier in CMAF Ingest output groups for customizing metadata track names in output streams.
+
# v1.63.0 (2024-12-09)
* **Feature**: H265 outputs now support disabling the deblocking filter.
diff --git a/service/medialive/deserializers.go b/service/medialive/deserializers.go
index b815bbd2d8e..be1d2ed8468 100644
--- a/service/medialive/deserializers.go
+++ b/service/medialive/deserializers.go
@@ -29906,6 +29906,24 @@ func awsRestjson1_deserializeDocumentCmafIngestGroupSettings(v **types.CmafInges
return err
}
+ case "klvBehavior":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected CmafKLVBehavior to be of type string, got %T instead", value)
+ }
+ sv.KlvBehavior = types.CmafKLVBehavior(jtv)
+ }
+
+ case "klvNameModifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected __stringMax100 to be of type string, got %T instead", value)
+ }
+ sv.KlvNameModifier = ptr.String(jtv)
+ }
+
case "nielsenId3Behavior":
if value != nil {
jtv, ok := value.(string)
@@ -29915,6 +29933,24 @@ func awsRestjson1_deserializeDocumentCmafIngestGroupSettings(v **types.CmafInges
sv.NielsenId3Behavior = types.CmafNielsenId3Behavior(jtv)
}
+ case "nielsenId3NameModifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected __stringMax100 to be of type string, got %T instead", value)
+ }
+ sv.NielsenId3NameModifier = ptr.String(jtv)
+ }
+
+ case "scte35NameModifier":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected __stringMax100 to be of type string, got %T instead", value)
+ }
+ sv.Scte35NameModifier = ptr.String(jtv)
+ }
+
case "scte35Type":
if value != nil {
jtv, ok := value.(string)
@@ -38549,6 +38585,15 @@ func awsRestjson1_deserializeDocumentMediaPackageOutputDestinationSettings(v **t
for key, value := range shape {
switch key {
+ case "channelGroup":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected __stringMin1 to be of type string, got %T instead", value)
+ }
+ sv.ChannelGroup = ptr.String(jtv)
+ }
+
case "channelId":
if value != nil {
jtv, ok := value.(string)
@@ -38558,6 +38603,15 @@ func awsRestjson1_deserializeDocumentMediaPackageOutputDestinationSettings(v **t
sv.ChannelId = ptr.String(jtv)
}
+ case "channelName":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected __stringMin1 to be of type string, got %T instead", value)
+ }
+ sv.ChannelName = ptr.String(jtv)
+ }
+
default:
_, _ = key, value
diff --git a/service/medialive/go_module_metadata.go b/service/medialive/go_module_metadata.go
index 77252ae423f..073fbba064d 100644
--- a/service/medialive/go_module_metadata.go
+++ b/service/medialive/go_module_metadata.go
@@ -3,4 +3,4 @@
package medialive
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.63.0"
+const goModuleVersion = "1.64.0"
diff --git a/service/medialive/serializers.go b/service/medialive/serializers.go
index 184885ac03f..fca7bcbedd5 100644
--- a/service/medialive/serializers.go
+++ b/service/medialive/serializers.go
@@ -12217,11 +12217,31 @@ func awsRestjson1_serializeDocumentCmafIngestGroupSettings(v *types.CmafIngestGr
}
}
+ if len(v.KlvBehavior) > 0 {
+ ok := object.Key("klvBehavior")
+ ok.String(string(v.KlvBehavior))
+ }
+
+ if v.KlvNameModifier != nil {
+ ok := object.Key("klvNameModifier")
+ ok.String(*v.KlvNameModifier)
+ }
+
if len(v.NielsenId3Behavior) > 0 {
ok := object.Key("nielsenId3Behavior")
ok.String(string(v.NielsenId3Behavior))
}
+ if v.NielsenId3NameModifier != nil {
+ ok := object.Key("nielsenId3NameModifier")
+ ok.String(*v.NielsenId3NameModifier)
+ }
+
+ if v.Scte35NameModifier != nil {
+ ok := object.Key("scte35NameModifier")
+ ok.String(*v.Scte35NameModifier)
+ }
+
if len(v.Scte35Type) > 0 {
ok := object.Key("scte35Type")
ok.String(string(v.Scte35Type))
@@ -15434,11 +15454,21 @@ func awsRestjson1_serializeDocumentMediaPackageOutputDestinationSettings(v *type
object := value.Object()
defer object.Close()
+ if v.ChannelGroup != nil {
+ ok := object.Key("channelGroup")
+ ok.String(*v.ChannelGroup)
+ }
+
if v.ChannelId != nil {
ok := object.Key("channelId")
ok.String(*v.ChannelId)
}
+ if v.ChannelName != nil {
+ ok := object.Key("channelName")
+ ok.String(*v.ChannelName)
+ }
+
return nil
}
diff --git a/service/medialive/types/enums.go b/service/medialive/types/enums.go
index 94e57f6c358..0949121ef52 100644
--- a/service/medialive/types/enums.go
+++ b/service/medialive/types/enums.go
@@ -1181,6 +1181,25 @@ func (CmafIngestSegmentLengthUnits) Values() []CmafIngestSegmentLengthUnits {
}
}
+type CmafKLVBehavior string
+
+// Enum values for CmafKLVBehavior
+const (
+ CmafKLVBehaviorNoPassthrough CmafKLVBehavior = "NO_PASSTHROUGH"
+ CmafKLVBehaviorPassthrough CmafKLVBehavior = "PASSTHROUGH"
+)
+
+// Values returns all known values for CmafKLVBehavior. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (CmafKLVBehavior) Values() []CmafKLVBehavior {
+ return []CmafKLVBehavior{
+ "NO_PASSTHROUGH",
+ "PASSTHROUGH",
+ }
+}
+
type CmafNielsenId3Behavior string
// Enum values for CmafNielsenId3Behavior
diff --git a/service/medialive/types/types.go b/service/medialive/types/types.go
index 24ac11d14af..242fd492205 100644
--- a/service/medialive/types/types.go
+++ b/service/medialive/types/types.go
@@ -1510,11 +1510,36 @@ type CmafIngestGroupSettings struct {
// This member is required.
Destination *OutputLocationRef
+ // If set to passthrough, passes any KLV data from the input source to this output.
+ KlvBehavior CmafKLVBehavior
+
+ // Change the modifier that MediaLive automatically adds to the Streams() name
+ // that identifies a KLV track. The default is "klv", which means the default name
+ // will be Streams(klv.cmfm). Any string you enter here will replace the "klv"
 // string. The modifier can only contain: numbers, letters, plus (+), minus (-),
+ // underscore (_) and period (.) and has a maximum length of 100 characters.
+ KlvNameModifier *string
+
// If set to passthrough, Nielsen inaudible tones for media tracking will be
// detected in the input audio and an equivalent ID3 tag will be inserted in the
// output.
NielsenId3Behavior CmafNielsenId3Behavior
+ // Change the modifier that MediaLive automatically adds to the Streams() name
+ // that identifies a Nielsen ID3 track. The default is "nid3", which means the
+ // default name will be Streams(nid3.cmfm). Any string you enter here will replace
 // the "nid3" string. The modifier can only contain: numbers, letters, plus (+),
+ // minus (-), underscore (_) and period (.) and has a maximum length of 100
+ // characters.
+ NielsenId3NameModifier *string
+
+ // Change the modifier that MediaLive automatically adds to the Streams() name for
+ // a SCTE 35 track. The default is "scte", which means the default name will be
+ // Streams(scte.cmfm). Any string you enter here will replace the "scte"
+ // string.\nThe modifier can only contain: numbers, letters, plus (+), minus (-),
+ // underscore (_) and period (.) and has a maximum length of 100 characters.
+ Scte35NameModifier *string
+
// Type of scte35 track to add. none or scte35WithoutSegmentation
Scte35Type Scte35Type
@@ -2428,7 +2453,9 @@ type Fmp4HlsSettings struct {
// output.
NielsenId3Behavior Fmp4NielsenId3Behavior
- // When set to passthrough, timed metadata is passed through from input to output.
+ // Set to PASSTHROUGH to enable ID3 metadata insertion. To include metadata, you
+ // configure other parameters in the output group or individual outputs, or you add
+ // an ID3 action to the channel schedule.
TimedMetadataBehavior Fmp4TimedMetadataBehavior
noSmithyDocumentSerde
@@ -3435,16 +3462,19 @@ type HlsGroupSettings struct {
noSmithyDocumentSerde
}
-// Settings for the action to insert a user-defined ID3 tag in each HLS segment
+// Settings for the action to insert ID3 metadata in every segment, in HLS output
+// groups.
type HlsId3SegmentTaggingScheduleActionSettings struct {
- // Base64 string formatted according to the ID3 specification:
- // http://id3.org/id3v2.4.0-structure
+ // Complete this parameter if you want to specify the entire ID3 metadata. Enter a
+ // base64 string that contains one or more fully formed ID3 tags, according to the
+ // ID3 specification: http://id3.org/id3v2.4.0-structure
Id3 *string
- // ID3 tag to insert into each segment. Supports special keyword identifiers to
- // substitute in segment-related values.\nSupported keyword identifiers:
- // https://docs.aws.amazon.com/medialive/latest/ug/variable-data-identifiers.html
+ // Complete this parameter if you want to specify only the metadata, not the
+ // entire frame. MediaLive will insert the metadata in a TXXX frame. Enter the
+ // value as plain text. You can include standard MediaLive variable data such as
+ // the current segment number.
Tag *string
noSmithyDocumentSerde
@@ -3557,11 +3587,12 @@ type HlsSettings struct {
noSmithyDocumentSerde
}
-// Settings for the action to emit HLS metadata
+// Settings for the action to insert ID3 metadata (as a one-time action) in HLS
+// output groups.
type HlsTimedMetadataScheduleActionSettings struct {
- // Base64 string formatted according to the ID3 specification:
- // http://id3.org/id3v2.4.0-structure
 // Enter a base64 string that contains one or more fully formed ID3 tags. See the
+ // ID3 specification: http://id3.org/id3v2.4.0-structure
//
// This member is required.
Id3 *string
@@ -4776,7 +4807,9 @@ type M3u8Settings struct {
// entered as a decimal or hexadecimal value.
Scte35Pid *string
- // When set to passthrough, timed metadata is passed through from input to output.
+ // Set to PASSTHROUGH to enable ID3 metadata insertion. To include metadata, you
+ // configure other parameters in the output group or individual outputs, or you add
+ // an ID3 action to the channel schedule.
TimedMetadataBehavior M3u8TimedMetadataBehavior
// Packet Identifier (PID) of the timed metadata stream in the transport stream.
@@ -4876,6 +4909,11 @@ type MediaPackageGroupSettings struct {
// MediaPackage Output Destination Settings
type MediaPackageOutputDestinationSettings struct {
+ // Name of the channel group in MediaPackageV2. Only use if you are sending CMAF
+ // Ingest output to a CMAF ingest endpoint on a MediaPackage channel that uses
+ // MediaPackage v2.
+ ChannelGroup *string
+
// ID of the channel in MediaPackage that is the destination for this output
// group. You do not need to specify the individual inputs in MediaPackage;
// MediaLive will handle the connection of the two MediaLive pipelines to the two
@@ -4883,6 +4921,11 @@ type MediaPackageOutputDestinationSettings struct {
// the same region.
ChannelId *string
+ // Name of the channel in MediaPackageV2. Only use if you are sending CMAF Ingest
+ // output to a CMAF ingest endpoint on a MediaPackage channel that uses
+ // MediaPackage v2.
+ ChannelName *string
+
noSmithyDocumentSerde
}
@@ -6399,10 +6442,10 @@ type ScheduleAction struct {
// Holds the settings for a single schedule action.
type ScheduleActionSettings struct {
- // Action to insert HLS ID3 segment tagging
+ // Action to insert ID3 metadata in every segment, in HLS output groups
HlsId3SegmentTaggingSettings *HlsId3SegmentTaggingScheduleActionSettings
- // Action to insert HLS metadata
+ // Action to insert ID3 metadata once, in HLS output groups
HlsTimedMetadataSettings *HlsTimedMetadataScheduleActionSettings
// Action to prepare an input for a future immediate input switch
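The three new name-modifier fields above share one documented constraint: numbers, letters, plus (+), minus (-), underscore (_), and period (.), with a maximum length of 100 characters. A small client-side pre-check under that stated rule can fail fast before a request is sent; this regexp is an assumption built from the doc comments, and the service performs its own authoritative validation:

```go
package main

import (
	"fmt"
	"regexp"
)

// validModifier encodes the constraint described in the KlvNameModifier,
// NielsenId3NameModifier, and Scte35NameModifier doc comments: letters,
// numbers, plus, minus, underscore, and period, 1 to 100 characters.
// Client-side convenience only; MediaLive validates server-side as well.
var validModifier = regexp.MustCompile(`^[A-Za-z0-9+\-_.]{1,100}$`)

func main() {
	fmt.Println(validModifier.MatchString("klv-track.v2"))  // true
	fmt.Println(validModifier.MatchString("bad modifier!")) // false: space and !
}
```

Checking locally avoids a round trip whose only outcome would be a validation exception, but should never replace handling the service's own error response.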
diff --git a/service/mwaa/CHANGELOG.md b/service/mwaa/CHANGELOG.md
index 048b8b9cc9a..0682e977c41 100644
--- a/service/mwaa/CHANGELOG.md
+++ b/service/mwaa/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.33.2 (2024-12-18)
+
+* **Documentation**: Added support for Apache Airflow version 2.10.3 to MWAA.
+
# v1.33.1 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/mwaa/api_op_CreateEnvironment.go b/service/mwaa/api_op_CreateEnvironment.go
index 9209c429716..9f4b2ee219e 100644
--- a/service/mwaa/api_op_CreateEnvironment.go
+++ b/service/mwaa/api_op_CreateEnvironment.go
@@ -87,7 +87,7 @@ type CreateEnvironmentInput struct {
// defaults to the latest version. For more information, see [Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (Amazon MWAA)].
//
// Valid values: 1.10.12 , 2.0.2 , 2.2.2 , 2.4.3 , 2.5.1 , 2.6.3 , 2.7.2 , 2.8.1 ,
- // 2.9.2 , and 2.10.1 .
+ // 2.9.2 , 2.10.1 , and 2.10.3 .
//
// [Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (Amazon MWAA)]: https://docs.aws.amazon.com/mwaa/latest/userguide/airflow-versions.html
AirflowVersion *string
diff --git a/service/mwaa/api_op_UpdateEnvironment.go b/service/mwaa/api_op_UpdateEnvironment.go
index b1bb592133b..ed1b03f36a6 100644
--- a/service/mwaa/api_op_UpdateEnvironment.go
+++ b/service/mwaa/api_op_UpdateEnvironment.go
@@ -48,7 +48,7 @@ type UpdateEnvironmentInput struct {
// Airflow version. For more information about updating your resources, see [Upgrading an Amazon MWAA environment].
//
// Valid values: 1.10.12 , 2.0.2 , 2.2.2 , 2.4.3 , 2.5.1 , 2.6.3 , 2.7.2 , 2.8.1 ,
- // 2.9.2 , and 2.10.1 .
+ // 2.9.2 , 2.10.1 , and 2.10.3 .
//
// [Upgrading an Amazon MWAA environment]: https://docs.aws.amazon.com/mwaa/latest/userguide/upgrading-environment.html
AirflowVersion *string
diff --git a/service/mwaa/go_module_metadata.go b/service/mwaa/go_module_metadata.go
index eef85778afa..bd4ad68003a 100644
--- a/service/mwaa/go_module_metadata.go
+++ b/service/mwaa/go_module_metadata.go
@@ -3,4 +3,4 @@
package mwaa
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.33.1"
+const goModuleVersion = "1.33.2"
diff --git a/service/mwaa/types/types.go b/service/mwaa/types/types.go
index 22e5111d4a4..ecd73012b7e 100644
--- a/service/mwaa/types/types.go
+++ b/service/mwaa/types/types.go
@@ -39,7 +39,7 @@ type Environment struct {
// The Apache Airflow version on your environment.
//
// Valid values: 1.10.12 , 2.0.2 , 2.2.2 , 2.4.3 , 2.5.1 , 2.6.3 , 2.7.2 , 2.8.1 ,
- // 2.9.2 , and 2.10.1 .
+ // 2.9.2 , 2.10.1 , and 2.10.3 .
AirflowVersion *string
// The Amazon Resource Name (ARN) of the Amazon MWAA environment.
diff --git a/service/networkfirewall/CHANGELOG.md b/service/networkfirewall/CHANGELOG.md
index 92952b42fa8..8047f9e44b7 100644
--- a/service/networkfirewall/CHANGELOG.md
+++ b/service/networkfirewall/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.44.5 (2024-12-12)
+
+* No change notes available for this release.
+
# v1.44.4 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/networkfirewall/go_module_metadata.go b/service/networkfirewall/go_module_metadata.go
index 0e84594598c..cfb1398d9b8 100644
--- a/service/networkfirewall/go_module_metadata.go
+++ b/service/networkfirewall/go_module_metadata.go
@@ -3,4 +3,4 @@
package networkfirewall
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.44.4"
+const goModuleVersion = "1.44.5"
diff --git a/service/networkfirewall/internal/endpoints/endpoints.go b/service/networkfirewall/internal/endpoints/endpoints.go
index 4c38d18a9ef..0bcd06394eb 100644
--- a/service/networkfirewall/internal/endpoints/endpoints.go
+++ b/service/networkfirewall/internal/endpoints/endpoints.go
@@ -172,6 +172,9 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "ap-southeast-4",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ap-southeast-5",
+ }: endpoints.Endpoint{},
endpoints.EndpointKey{
Region: "ca-central-1",
}: endpoints.Endpoint{},
diff --git a/service/networkmanager/CHANGELOG.md b/service/networkmanager/CHANGELOG.md
index 7c3d3fd89f8..4ff172ed79f 100644
--- a/service/networkmanager/CHANGELOG.md
+++ b/service/networkmanager/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.32.2 (2024-12-13)
+
* **Documentation**: Removed a sentence fragment from the UpdateDirectConnectGatewayAttachment documentation that was causing customer confusion as to whether it was an incomplete sentence or a typo.
+
# v1.32.1 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/networkmanager/api_op_UpdateDirectConnectGatewayAttachment.go b/service/networkmanager/api_op_UpdateDirectConnectGatewayAttachment.go
index 9f8418c39d9..5924c176eb6 100644
--- a/service/networkmanager/api_op_UpdateDirectConnectGatewayAttachment.go
+++ b/service/networkmanager/api_op_UpdateDirectConnectGatewayAttachment.go
@@ -37,7 +37,7 @@ type UpdateDirectConnectGatewayAttachmentInput struct {
// One or more edge locations to update for the Direct Connect gateway attachment.
// The updated array of edge locations overwrites the previous array of locations.
- // EdgeLocations is only used for Direct Connect gateway attachments. Do
+ // EdgeLocations is only used for Direct Connect gateway attachments.
EdgeLocations []string
noSmithyDocumentSerde
diff --git a/service/networkmanager/go_module_metadata.go b/service/networkmanager/go_module_metadata.go
index d33b7915f2a..86b0b2dd33b 100644
--- a/service/networkmanager/go_module_metadata.go
+++ b/service/networkmanager/go_module_metadata.go
@@ -3,4 +3,4 @@
package networkmanager
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.32.1"
+const goModuleVersion = "1.32.2"
diff --git a/service/organizations/CHANGELOG.md b/service/organizations/CHANGELOG.md
index cf9bd0410b4..2d8e34f9648 100644
--- a/service/organizations/CHANGELOG.md
+++ b/service/organizations/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.36.1 (2024-12-12)
+
+* No change notes available for this release.
+
# v1.36.0 (2024-12-02)
* **Feature**: Add support for policy operations on the DECLARATIVE_POLICY_EC2 policy type.
diff --git a/service/organizations/go_module_metadata.go b/service/organizations/go_module_metadata.go
index 8dae7a084b2..cfd529ea9dd 100644
--- a/service/organizations/go_module_metadata.go
+++ b/service/organizations/go_module_metadata.go
@@ -3,4 +3,4 @@
package organizations
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.36.0"
+const goModuleVersion = "1.36.1"
diff --git a/service/organizations/internal/endpoints/endpoints.go b/service/organizations/internal/endpoints/endpoints.go
index ec0bf3b3c25..ca2f175d598 100644
--- a/service/organizations/internal/endpoints/endpoints.go
+++ b/service/organizations/internal/endpoints/endpoints.go
@@ -253,8 +253,19 @@ var defaultPartitions = endpoints.Partitions{
SignatureVersions: []string{"v4"},
},
},
- RegionRegex: partitionRegexp.AwsIsoB,
- IsRegionalized: true,
+ RegionRegex: partitionRegexp.AwsIsoB,
+ IsRegionalized: false,
+ PartitionEndpoint: "aws-iso-b-global",
+ Endpoints: endpoints.Endpoints{
+ endpoints.EndpointKey{
+ Region: "aws-iso-b-global",
+ }: endpoints.Endpoint{
+ Hostname: "organizations.us-isob-east-1.sc2s.sgov.gov",
+ CredentialScope: endpoints.CredentialScope{
+ Region: "us-isob-east-1",
+ },
+ },
+ },
},
{
ID: "aws-iso-e",
diff --git a/service/quicksight/CHANGELOG.md b/service/quicksight/CHANGELOG.md
index ea2605f1159..1cfa2991e54 100644
--- a/service/quicksight/CHANGELOG.md
+++ b/service/quicksight/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.82.0 (2024-12-18)
+
* **Feature**: Add support for PerformanceConfiguration attribute to Dataset entity. Allow PerformanceConfiguration specification in CreateDataSet and UpdateDataSet APIs.
+
# v1.81.0 (2024-12-03.2)
* **Feature**: This release includes API needed to support for Unstructured Data in Q in QuickSight Q&A (IDC).
diff --git a/service/quicksight/api_op_CreateDataSet.go b/service/quicksight/api_op_CreateDataSet.go
index 8239889bdf0..db475933f70 100644
--- a/service/quicksight/api_op_CreateDataSet.go
+++ b/service/quicksight/api_op_CreateDataSet.go
@@ -83,6 +83,10 @@ type CreateDataSetInput struct {
// tables.
LogicalTableMap map[string]types.LogicalTable
+ // The configuration for the performance optimization of the dataset that contains
+ // a UniqueKey configuration.
+ PerformanceConfiguration *types.PerformanceConfiguration
+
// A list of resource permissions on the dataset.
Permissions []types.ResourcePermission
diff --git a/service/quicksight/api_op_UpdateDataSet.go b/service/quicksight/api_op_UpdateDataSet.go
index 042a442a1a8..08c675ac5f3 100644
--- a/service/quicksight/api_op_UpdateDataSet.go
+++ b/service/quicksight/api_op_UpdateDataSet.go
@@ -79,6 +79,10 @@ type UpdateDataSetInput struct {
// tables.
LogicalTableMap map[string]types.LogicalTable
+ // The configuration for the performance optimization of the dataset that contains
+ // a UniqueKey configuration.
+ PerformanceConfiguration *types.PerformanceConfiguration
+
// The row-level security configuration for the data you want to create.
RowLevelPermissionDataSet *types.RowLevelPermissionDataSet
diff --git a/service/quicksight/deserializers.go b/service/quicksight/deserializers.go
index ab433204c4a..8325619093a 100644
--- a/service/quicksight/deserializers.go
+++ b/service/quicksight/deserializers.go
@@ -58820,6 +58820,11 @@ func awsRestjson1_deserializeDocumentDataSet(v **types.DataSet, value interface{
return err
}
+ case "PerformanceConfiguration":
+ if err := awsRestjson1_deserializeDocumentPerformanceConfiguration(&sv.PerformanceConfiguration, value); err != nil {
+ return err
+ }
+
case "PhysicalTableMap":
if err := awsRestjson1_deserializeDocumentPhysicalTableMap(&sv.PhysicalTableMap, value); err != nil {
return err
@@ -81249,6 +81254,42 @@ func awsRestjson1_deserializeDocumentPercentVisibleRange(v **types.PercentVisibl
return nil
}
+func awsRestjson1_deserializeDocumentPerformanceConfiguration(v **types.PerformanceConfiguration, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.PerformanceConfiguration
+ if *v == nil {
+ sv = &types.PerformanceConfiguration{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "UniqueKeys":
+ if err := awsRestjson1_deserializeDocumentUniqueKeyList(&sv.UniqueKeys, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentPeriodOverPeriodComputation(v **types.PeriodOverPeriodComputation, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -100857,6 +100898,112 @@ func awsRestjson1_deserializeDocumentUnaggregatedFieldList(v *[]types.Unaggregat
return nil
}
+func awsRestjson1_deserializeDocumentUniqueKey(v **types.UniqueKey, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.UniqueKey
+ if *v == nil {
+ sv = &types.UniqueKey{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ColumnNames":
+ if err := awsRestjson1_deserializeDocumentUniqueKeyColumnNameList(&sv.ColumnNames, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentUniqueKeyColumnNameList(v *[]string, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []string
+ if *v == nil {
+ cv = []string{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col string
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ColumnName to be of type string, got %T instead", value)
+ }
+ col = jtv
+ }
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
+func awsRestjson1_deserializeDocumentUniqueKeyList(v *[]types.UniqueKey, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.([]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var cv []types.UniqueKey
+ if *v == nil {
+ cv = []types.UniqueKey{}
+ } else {
+ cv = *v
+ }
+
+ for _, value := range shape {
+ var col types.UniqueKey
+ destAddr := &col
+ if err := awsRestjson1_deserializeDocumentUniqueKey(&destAddr, value); err != nil {
+ return err
+ }
+ col = *destAddr
+ cv = append(cv, col)
+
+ }
+ *v = cv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentUniqueValuesComputation(v **types.UniqueValuesComputation, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
diff --git a/service/quicksight/go_module_metadata.go b/service/quicksight/go_module_metadata.go
index 1bbec239d94..0d66358b29e 100644
--- a/service/quicksight/go_module_metadata.go
+++ b/service/quicksight/go_module_metadata.go
@@ -3,4 +3,4 @@
package quicksight
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.81.0"
+const goModuleVersion = "1.82.0"
diff --git a/service/quicksight/serializers.go b/service/quicksight/serializers.go
index a9387a5a325..4e9eaef8336 100644
--- a/service/quicksight/serializers.go
+++ b/service/quicksight/serializers.go
@@ -1325,6 +1325,13 @@ func awsRestjson1_serializeOpDocumentCreateDataSetInput(v *CreateDataSetInput, v
ok.String(*v.Name)
}
+ if v.PerformanceConfiguration != nil {
+ ok := object.Key("PerformanceConfiguration")
+ if err := awsRestjson1_serializeDocumentPerformanceConfiguration(v.PerformanceConfiguration, ok); err != nil {
+ return err
+ }
+ }
+
if v.Permissions != nil {
ok := object.Key("Permissions")
if err := awsRestjson1_serializeDocumentResourcePermissionList(v.Permissions, ok); err != nil {
@@ -17080,6 +17087,13 @@ func awsRestjson1_serializeOpDocumentUpdateDataSetInput(v *UpdateDataSetInput, v
ok.String(*v.Name)
}
+ if v.PerformanceConfiguration != nil {
+ ok := object.Key("PerformanceConfiguration")
+ if err := awsRestjson1_serializeDocumentPerformanceConfiguration(v.PerformanceConfiguration, ok); err != nil {
+ return err
+ }
+ }
+
if v.PhysicalTableMap != nil {
ok := object.Key("PhysicalTableMap")
if err := awsRestjson1_serializeDocumentPhysicalTableMap(v.PhysicalTableMap, ok); err != nil {
@@ -36464,6 +36478,20 @@ func awsRestjson1_serializeDocumentPercentVisibleRange(v *types.PercentVisibleRa
return nil
}
+func awsRestjson1_serializeDocumentPerformanceConfiguration(v *types.PerformanceConfiguration, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.UniqueKeys != nil {
+ ok := object.Key("UniqueKeys")
+ if err := awsRestjson1_serializeDocumentUniqueKeyList(v.UniqueKeys, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
func awsRestjson1_serializeDocumentPeriodOverPeriodComputation(v *types.PeriodOverPeriodComputation, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -44835,6 +44863,44 @@ func awsRestjson1_serializeDocumentUnaggregatedFieldList(v []types.UnaggregatedF
return nil
}
+func awsRestjson1_serializeDocumentUniqueKey(v *types.UniqueKey, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.ColumnNames != nil {
+ ok := object.Key("ColumnNames")
+ if err := awsRestjson1_serializeDocumentUniqueKeyColumnNameList(v.ColumnNames, ok); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func awsRestjson1_serializeDocumentUniqueKeyColumnNameList(v []string, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ av.String(v[i])
+ }
+ return nil
+}
+
+func awsRestjson1_serializeDocumentUniqueKeyList(v []types.UniqueKey, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ if err := awsRestjson1_serializeDocumentUniqueKey(&v[i], av); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
func awsRestjson1_serializeDocumentUniqueValuesComputation(v *types.UniqueValuesComputation, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
diff --git a/service/quicksight/types/types.go b/service/quicksight/types/types.go
index eb85b5dddca..4c8c777cef5 100644
--- a/service/quicksight/types/types.go
+++ b/service/quicksight/types/types.go
@@ -4597,6 +4597,9 @@ type DataSet struct {
// templates, analyses, and dashboards.
OutputColumns []OutputColumn
+ // The performance optimization configuration of a dataset.
+ PerformanceConfiguration *PerformanceConfiguration
+
// Declares the physical tables that are available in the underlying data sources.
PhysicalTableMap map[string]PhysicalTable
@@ -11327,6 +11330,16 @@ type PercentVisibleRange struct {
noSmithyDocumentSerde
}
+// The performance optimization configuration of a dataset that contains a
+// UniqueKey configuration.
+type PerformanceConfiguration struct {
+
+	// A list of UniqueKey configurations.
+ UniqueKeys []UniqueKey
+
+ noSmithyDocumentSerde
+}
+
// The period over period computation configuration.
type PeriodOverPeriodComputation struct {
@@ -17286,6 +17299,17 @@ type UnaggregatedField struct {
noSmithyDocumentSerde
}
+// A UniqueKey configuration that references a dataset column.
+type UniqueKey struct {
+
+	// The names of the columns that are referenced in the UniqueKey configuration.
+ //
+ // This member is required.
+ ColumnNames []string
+
+ noSmithyDocumentSerde
+}
+
// The unique values computation configuration.
type UniqueValuesComputation struct {
diff --git a/service/quicksight/validators.go b/service/quicksight/validators.go
index f37dbff2f39..67b465f0e37 100644
--- a/service/quicksight/validators.go
+++ b/service/quicksight/validators.go
@@ -15035,6 +15035,23 @@ func validatePercentageDisplayFormatConfiguration(v *types.PercentageDisplayForm
}
}
+func validatePerformanceConfiguration(v *types.PerformanceConfiguration) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "PerformanceConfiguration"}
+ if v.UniqueKeys != nil {
+ if err := validateUniqueKeyList(v.UniqueKeys); err != nil {
+ invalidParams.AddNested("UniqueKeys", err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validatePeriodOverPeriodComputation(v *types.PeriodOverPeriodComputation) error {
if v == nil {
return nil
@@ -20039,6 +20056,38 @@ func validateUnaggregatedFieldList(v []types.UnaggregatedField) error {
}
}
+func validateUniqueKey(v *types.UniqueKey) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UniqueKey"}
+ if v.ColumnNames == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ColumnNames"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
+func validateUniqueKeyList(v []types.UniqueKey) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UniqueKeyList"}
+ for i := range v {
+ if err := validateUniqueKey(&v[i]); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+ }
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateUniqueValuesComputation(v *types.UniqueValuesComputation) error {
if v == nil {
return nil
@@ -21057,6 +21106,11 @@ func validateOpCreateDataSetInput(v *CreateDataSetInput) error {
invalidParams.AddNested("DatasetParameters", err.(smithy.InvalidParamsError))
}
}
+ if v.PerformanceConfiguration != nil {
+ if err := validatePerformanceConfiguration(v.PerformanceConfiguration); err != nil {
+ invalidParams.AddNested("PerformanceConfiguration", err.(smithy.InvalidParamsError))
+ }
+ }
if invalidParams.Len() > 0 {
return invalidParams
} else {
@@ -24561,6 +24615,11 @@ func validateOpUpdateDataSetInput(v *UpdateDataSetInput) error {
invalidParams.AddNested("DatasetParameters", err.(smithy.InvalidParamsError))
}
}
+ if v.PerformanceConfiguration != nil {
+ if err := validatePerformanceConfiguration(v.PerformanceConfiguration); err != nil {
+ invalidParams.AddNested("PerformanceConfiguration", err.(smithy.InvalidParamsError))
+ }
+ }
if invalidParams.Len() > 0 {
return invalidParams
} else {
diff --git a/service/rds/CHANGELOG.md b/service/rds/CHANGELOG.md
index fb3a04d5dda..6c78021c9ae 100644
--- a/service/rds/CHANGELOG.md
+++ b/service/rds/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.93.0 (2024-12-16)
+
+* **Feature**: This release adds support for the "MYSQL_CACHING_SHA2_PASSWORD" enum value for RDS Proxy ClientPasswordAuthType.
+
# v1.92.0 (2024-12-02)
* **Feature**: Amazon RDS supports CloudWatch Database Insights. You can use the SDK to create, modify, and describe the DatabaseInsightsMode for your DB instances and clusters.
diff --git a/service/rds/go_module_metadata.go b/service/rds/go_module_metadata.go
index 7bb0f87ab18..206f31ce5fa 100644
--- a/service/rds/go_module_metadata.go
+++ b/service/rds/go_module_metadata.go
@@ -3,4 +3,4 @@
package rds
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.92.0"
+const goModuleVersion = "1.93.0"
diff --git a/service/rds/types/enums.go b/service/rds/types/enums.go
index 27b68b9f40a..8391cd72a01 100644
--- a/service/rds/types/enums.go
+++ b/service/rds/types/enums.go
@@ -145,10 +145,11 @@ type ClientPasswordAuthType string
// Enum values for ClientPasswordAuthType
const (
- ClientPasswordAuthTypeMysqlNativePassword ClientPasswordAuthType = "MYSQL_NATIVE_PASSWORD"
- ClientPasswordAuthTypePostgresScramSha256 ClientPasswordAuthType = "POSTGRES_SCRAM_SHA_256"
- ClientPasswordAuthTypePostgresMd5 ClientPasswordAuthType = "POSTGRES_MD5"
- ClientPasswordAuthTypeSqlServerAuthentication ClientPasswordAuthType = "SQL_SERVER_AUTHENTICATION"
+ ClientPasswordAuthTypeMysqlNativePassword ClientPasswordAuthType = "MYSQL_NATIVE_PASSWORD"
+ ClientPasswordAuthTypeMysqlCachingSha2Password ClientPasswordAuthType = "MYSQL_CACHING_SHA2_PASSWORD"
+ ClientPasswordAuthTypePostgresScramSha256 ClientPasswordAuthType = "POSTGRES_SCRAM_SHA_256"
+ ClientPasswordAuthTypePostgresMd5 ClientPasswordAuthType = "POSTGRES_MD5"
+ ClientPasswordAuthTypeSqlServerAuthentication ClientPasswordAuthType = "SQL_SERVER_AUTHENTICATION"
)
// Values returns all known values for ClientPasswordAuthType. Note that this can
@@ -158,6 +159,7 @@ const (
func (ClientPasswordAuthType) Values() []ClientPasswordAuthType {
return []ClientPasswordAuthType{
"MYSQL_NATIVE_PASSWORD",
+ "MYSQL_CACHING_SHA2_PASSWORD",
"POSTGRES_SCRAM_SHA_256",
"POSTGRES_MD5",
"SQL_SERVER_AUTHENTICATION",
diff --git a/service/resiliencehub/CHANGELOG.md b/service/resiliencehub/CHANGELOG.md
index 416e452bdc8..aafe58842cf 100644
--- a/service/resiliencehub/CHANGELOG.md
+++ b/service/resiliencehub/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.29.0 (2024-12-18)
+
+* **Feature**: AWS Resilience Hub now automatically detects already configured CloudWatch alarms and FIS experiments as part of the assessment process and returns the discovered resources in the corresponding list API responses. It also allows you to include or exclude test recommendations for an AppComponent.
+
# v1.28.1 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/resiliencehub/deserializers.go b/service/resiliencehub/deserializers.go
index 3917d92ed2b..c2bab0685aa 100644
--- a/service/resiliencehub/deserializers.go
+++ b/service/resiliencehub/deserializers.go
@@ -11520,6 +11520,55 @@ func awsRestjson1_deserializeDocumentAdditionalInfoValueList(v *[]string, value
return nil
}
+func awsRestjson1_deserializeDocumentAlarm(v **types.Alarm, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.Alarm
+ if *v == nil {
+ sv = &types.Alarm{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "alarmArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected Arn to be of type string, got %T instead", value)
+ }
+ sv.AlarmArn = ptr.String(jtv)
+ }
+
+ case "source":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String255 to be of type string, got %T instead", value)
+ }
+ sv.Source = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentAlarmRecommendation(v **types.AlarmRecommendation, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -13373,6 +13422,15 @@ func awsRestjson1_deserializeDocumentBatchUpdateRecommendationStatusSuccessfulEn
for key, value := range shape {
switch key {
+ case "appComponentId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected EntityName255 to be of type string, got %T instead", value)
+ }
+ sv.AppComponentId = ptr.String(jtv)
+ }
+
case "entryId":
if value != nil {
jtv, ok := value.(string)
@@ -14535,6 +14593,55 @@ func awsRestjson1_deserializeDocumentEventSubscriptionList(v *[]types.EventSubsc
return nil
}
+func awsRestjson1_deserializeDocumentExperiment(v **types.Experiment, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.Experiment
+ if *v == nil {
+ sv = &types.Experiment{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "experimentArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String255 to be of type string, got %T instead", value)
+ }
+ sv.ExperimentArn = ptr.String(jtv)
+ }
+
+ case "experimentTemplateId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected String255 to be of type string, got %T instead", value)
+ }
+ sv.ExperimentTemplateId = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsRestjson1_deserializeDocumentFailedGroupingRecommendationEntries(v *[]types.FailedGroupingRecommendationEntry, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -15586,6 +15693,11 @@ func awsRestjson1_deserializeDocumentRecommendationItem(v **types.Recommendation
sv.AlreadyImplemented = ptr.Bool(jtv)
}
+ case "discoveredAlarm":
+ if err := awsRestjson1_deserializeDocumentAlarm(&sv.DiscoveredAlarm, value); err != nil {
+ return err
+ }
+
case "excluded":
if value != nil {
jtv, ok := value.(bool)
@@ -15604,6 +15716,11 @@ func awsRestjson1_deserializeDocumentRecommendationItem(v **types.Recommendation
sv.ExcludeReason = types.ExcludeRecommendationReason(jtv)
}
+ case "latestDiscoveredExperiment":
+ if err := awsRestjson1_deserializeDocumentExperiment(&sv.LatestDiscoveredExperiment, value); err != nil {
+ return err
+ }
+
case "resourceId":
if value != nil {
jtv, ok := value.(string)
@@ -17261,6 +17378,15 @@ func awsRestjson1_deserializeDocumentTestRecommendation(v **types.TestRecommenda
for key, value := range shape {
switch key {
+ case "appComponentId":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected EntityName255 to be of type string, got %T instead", value)
+ }
+ sv.AppComponentId = ptr.String(jtv)
+ }
+
case "appComponentName":
if value != nil {
jtv, ok := value.(string)
diff --git a/service/resiliencehub/go_module_metadata.go b/service/resiliencehub/go_module_metadata.go
index 7b2ae0bba55..a992ebd4891 100644
--- a/service/resiliencehub/go_module_metadata.go
+++ b/service/resiliencehub/go_module_metadata.go
@@ -3,4 +3,4 @@
package resiliencehub
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.28.1"
+const goModuleVersion = "1.29.0"
diff --git a/service/resiliencehub/serializers.go b/service/resiliencehub/serializers.go
index 07e2fd5d956..901b2678b50 100644
--- a/service/resiliencehub/serializers.go
+++ b/service/resiliencehub/serializers.go
@@ -6576,6 +6576,11 @@ func awsRestjson1_serializeDocumentUpdateRecommendationStatusRequestEntry(v *typ
object := value.Object()
defer object.Close()
+ if v.AppComponentId != nil {
+ ok := object.Key("appComponentId")
+ ok.String(*v.AppComponentId)
+ }
+
if v.EntryId != nil {
ok := object.Key("entryId")
ok.String(*v.EntryId)
diff --git a/service/resiliencehub/types/types.go b/service/resiliencehub/types/types.go
index 5ce2ff4f3ae..02957580222 100644
--- a/service/resiliencehub/types/types.go
+++ b/service/resiliencehub/types/types.go
@@ -19,6 +19,20 @@ type AcceptGroupingRecommendationEntry struct {
noSmithyDocumentSerde
}
+// Indicates the Amazon CloudWatch alarm detected while running an assessment.
+type Alarm struct {
+
+ // Amazon Resource Name (ARN) of the Amazon CloudWatch alarm.
+ AlarmArn *string
+
+	// Indicates the source of the Amazon CloudWatch alarm. That is, whether the
+	// alarm was created from a Resilience Hub recommendation ( AwsResilienceHub )
+	// or directly in Amazon CloudWatch ( Customer ).
+ Source *string
+
+ noSmithyDocumentSerde
+}
+
// Defines a recommendation for a CloudWatch alarm.
type AlarmRecommendation struct {
@@ -483,7 +497,7 @@ type AppVersionSummary struct {
type AssessmentRiskRecommendation struct {
// Indicates the Application Components (AppComponents) that were assessed as part
- // of the assessnent and are associated with the identified risk and
+ // of the assessment and are associated with the identified risk and
// recommendation.
//
// This property is available only in the US East (N. Virginia) Region.
@@ -564,6 +578,9 @@ type BatchUpdateRecommendationStatusSuccessfulEntry struct {
// This member is required.
ReferenceId *string
+ // Indicates the identifier of an AppComponent.
+ AppComponentId *string
+
// Indicates the reason for excluding an operational recommendation.
ExcludeReason ExcludeRecommendationReason
@@ -847,6 +864,18 @@ type EventSubscription struct {
noSmithyDocumentSerde
}
+// Indicates the FIS experiment detected while running an assessment.
+type Experiment struct {
+
+ // Amazon Resource Name (ARN) of the FIS experiment.
+ ExperimentArn *string
+
+ // Identifier of the FIS experiment template.
+ ExperimentTemplateId *string
+
+ noSmithyDocumentSerde
+}
+
// Indicates the accepted grouping recommendation whose implementation failed.
type FailedGroupingRecommendationEntry struct {
@@ -1051,6 +1080,11 @@ type PermissionModel struct {
// account that will be assumed by Resilience Hub Service Principle to obtain a
// read-only access to your application resources while running an assessment.
//
+ // If your IAM role includes a path, you must include the path in the
+ // invokerRoleName parameter. For example, if your IAM role's ARN is
+	// arn:aws:iam::123456789012:role/my-path/role-name , you should pass
+ // my-path/role-name .
+ //
// - You must have iam:passRole permission for this role while creating or
// updating the application.
//
@@ -1220,12 +1254,20 @@ type RecommendationItem struct {
// Specifies if the recommendation has already been implemented.
AlreadyImplemented *bool
+ // Indicates the previously implemented Amazon CloudWatch alarm discovered by
+ // Resilience Hub.
+ DiscoveredAlarm *Alarm
+
// Indicates the reason for excluding an operational recommendation.
ExcludeReason ExcludeRecommendationReason
// Indicates if an operational recommendation item is excluded.
Excluded *bool
+ // Indicates the experiment created in FIS that was discovered by Resilience Hub,
+ // which matches the recommendation.
+ LatestDiscoveredExperiment *Experiment
+
// Identifier of the resource.
ResourceId *string
@@ -1635,6 +1677,9 @@ type TestRecommendation struct {
// This member is required.
ReferenceId *string
+ // Indicates the identifier of the AppComponent.
+ AppComponentId *string
+
// Name of the Application Component.
AppComponentName *string
@@ -1732,6 +1777,9 @@ type UpdateRecommendationStatusRequestEntry struct {
// This member is required.
ReferenceId *string
+ // Indicates the identifier of the AppComponent.
+ AppComponentId *string
+
// Indicates the reason for excluding an operational recommendation.
ExcludeReason ExcludeRecommendationReason
diff --git a/service/route53domains/CHANGELOG.md b/service/route53domains/CHANGELOG.md
index c7c8cb72c00..48a21c9982e 100644
--- a/service/route53domains/CHANGELOG.md
+++ b/service/route53domains/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.28.0 (2024-12-12)
+
+* **Feature**: This release includes the following API updates: added the enumeration value RESTORE_DOMAIN to the OperationType enum; constrained the Price attribute to non-negative values; updated the LangCode to allow 2 or 3 alphabetical characters.
+
# v1.27.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/route53domains/go_module_metadata.go b/service/route53domains/go_module_metadata.go
index 681327c65cd..f725a91862f 100644
--- a/service/route53domains/go_module_metadata.go
+++ b/service/route53domains/go_module_metadata.go
@@ -3,4 +3,4 @@
package route53domains
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.27.7"
+const goModuleVersion = "1.28.0"
diff --git a/service/route53domains/types/enums.go b/service/route53domains/types/enums.go
index fae9fb69355..be9619e9993 100644
--- a/service/route53domains/types/enums.go
+++ b/service/route53domains/types/enums.go
@@ -742,6 +742,7 @@ const (
OperationTypeInternalTransferInDomain OperationType = "INTERNAL_TRANSFER_IN_DOMAIN"
OperationTypeReleaseToGandi OperationType = "RELEASE_TO_GANDI"
OperationTypeTransferOnRenew OperationType = "TRANSFER_ON_RENEW"
+ OperationTypeRestoreDomain OperationType = "RESTORE_DOMAIN"
)
// Values returns all known values for OperationType. Note that this can be
@@ -770,6 +771,7 @@ func (OperationType) Values() []OperationType {
"INTERNAL_TRANSFER_IN_DOMAIN",
"RELEASE_TO_GANDI",
"TRANSFER_ON_RENEW",
+ "RESTORE_DOMAIN",
}
}
diff --git a/service/scheduler/CHANGELOG.md b/service/scheduler/CHANGELOG.md
index 8d738ff2e6a..8d5de165400 100644
--- a/service/scheduler/CHANGELOG.md
+++ b/service/scheduler/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.12.8 (2024-12-13)
+
+* No change notes available for this release.
+
# v1.12.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/scheduler/go_module_metadata.go b/service/scheduler/go_module_metadata.go
index 9d9cd6d942a..03f1f64dfc2 100644
--- a/service/scheduler/go_module_metadata.go
+++ b/service/scheduler/go_module_metadata.go
@@ -3,4 +3,4 @@
package scheduler
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.12.7"
+const goModuleVersion = "1.12.8"
diff --git a/service/scheduler/internal/endpoints/endpoints.go b/service/scheduler/internal/endpoints/endpoints.go
index e238301fe55..99a2dbf2423 100644
--- a/service/scheduler/internal/endpoints/endpoints.go
+++ b/service/scheduler/internal/endpoints/endpoints.go
@@ -277,6 +277,14 @@ var defaultPartitions = endpoints.Partitions{
},
RegionRegex: partitionRegexp.AwsIso,
IsRegionalized: true,
+ Endpoints: endpoints.Endpoints{
+ endpoints.EndpointKey{
+ Region: "us-iso-east-1",
+ }: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "us-iso-west-1",
+ }: endpoints.Endpoint{},
+ },
},
{
ID: "aws-iso-b",
@@ -298,6 +306,11 @@ var defaultPartitions = endpoints.Partitions{
},
RegionRegex: partitionRegexp.AwsIsoB,
IsRegionalized: true,
+ Endpoints: endpoints.Endpoints{
+ endpoints.EndpointKey{
+ Region: "us-isob-east-1",
+ }: endpoints.Endpoint{},
+ },
},
{
ID: "aws-iso-e",
diff --git a/service/servicediscovery/CHANGELOG.md b/service/servicediscovery/CHANGELOG.md
index 16db72a9385..a4ed8cd5e23 100644
--- a/service/servicediscovery/CHANGELOG.md
+++ b/service/servicediscovery/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.34.0 (2024-12-13)
+
+* **Feature**: AWS Cloud Map now supports service-level attributes, allowing you to associate custom metadata directly with services. These attributes can be retrieved, updated, and deleted using the new GetServiceAttributes, UpdateServiceAttributes, and DeleteServiceAttributes API calls.
+
# v1.33.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/servicediscovery/api_op_DeleteService.go b/service/servicediscovery/api_op_DeleteService.go
index e8bd1a52215..30d2c1f4c8e 100644
--- a/service/servicediscovery/api_op_DeleteService.go
+++ b/service/servicediscovery/api_op_DeleteService.go
@@ -10,8 +10,8 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http"
)
-// Deletes a specified service. If the service still contains one or more
-// registered instances, the request fails.
+// Deletes a specified service and all associated service attributes. If the
+// service still contains one or more registered instances, the request fails.
func (c *Client) DeleteService(ctx context.Context, params *DeleteServiceInput, optFns ...func(*Options)) (*DeleteServiceOutput, error) {
if params == nil {
params = &DeleteServiceInput{}
diff --git a/service/servicediscovery/api_op_DeleteServiceAttributes.go b/service/servicediscovery/api_op_DeleteServiceAttributes.go
new file mode 100644
index 00000000000..fef057b8b41
--- /dev/null
+++ b/service/servicediscovery/api_op_DeleteServiceAttributes.go
@@ -0,0 +1,157 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package servicediscovery
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Deletes specific attributes associated with a service.
+func (c *Client) DeleteServiceAttributes(ctx context.Context, params *DeleteServiceAttributesInput, optFns ...func(*Options)) (*DeleteServiceAttributesOutput, error) {
+ if params == nil {
+ params = &DeleteServiceAttributesInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "DeleteServiceAttributes", params, optFns, c.addOperationDeleteServiceAttributesMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*DeleteServiceAttributesOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type DeleteServiceAttributesInput struct {
+
+ // A list of keys corresponding to each attribute that you want to delete.
+ //
+ // This member is required.
+ Attributes []string
+
+ // The ID of the service from which the attributes will be deleted.
+ //
+ // This member is required.
+ ServiceId *string
+
+ noSmithyDocumentSerde
+}
+
+type DeleteServiceAttributesOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationDeleteServiceAttributesMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpDeleteServiceAttributes{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpDeleteServiceAttributes{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "DeleteServiceAttributes"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpDeleteServiceAttributesValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opDeleteServiceAttributes(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opDeleteServiceAttributes(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "DeleteServiceAttributes",
+ }
+}
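The long `addOperation…Middlewares` function above registers serializers, retry, signing, validation, and tracing steps on a single stack, with every registration returning an error that short-circuits construction. The underlying idea is an ordinary decorator chain; a loose, stdlib-only sketch (the `Handler`/`Middleware` types, `buildStack`, and the sample middlewares are illustrative, not smithy-go's actual API):

```go
package main

import "fmt"

// Handler is the terminal unit of work; Middleware wraps a Handler,
// loosely mirroring how smithy-go composes an operation's stack.
type Handler func(input string) (string, error)
type Middleware func(Handler) Handler

// buildStack composes middlewares so the first one added runs outermost,
// analogous to ordering steps on the generated middleware stack.
func buildStack(core Handler, mws ...Middleware) Handler {
	h := core
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

func logging(next Handler) Handler {
	return func(in string) (string, error) {
		fmt.Println("request:", in)
		return next(in)
	}
}

// validation plays the role of addOpDeleteServiceAttributesValidationMiddleware:
// reject the call before it ever reaches the core handler.
func validation(next Handler) Handler {
	return func(in string) (string, error) {
		if in == "" {
			return "", fmt.Errorf("missing required ServiceId")
		}
		return next(in)
	}
}

func main() {
	op := buildStack(func(in string) (string, error) {
		return "deleted attributes for " + in, nil
	}, logging, validation)
	out, err := op("srv-123")
	fmt.Println(out, err)
}
```

This is why the generated constructor is mostly `if err = add…; return err` lines: each step is independent, and the stack's ordering (serialize, then finalize, then deserialize) is what gives the operation its behavior.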
diff --git a/service/servicediscovery/api_op_GetServiceAttributes.go b/service/servicediscovery/api_op_GetServiceAttributes.go
new file mode 100644
index 00000000000..dc3f092829a
--- /dev/null
+++ b/service/servicediscovery/api_op_GetServiceAttributes.go
@@ -0,0 +1,158 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package servicediscovery
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/aws-sdk-go-v2/service/servicediscovery/types"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns the attributes associated with a specified service.
+func (c *Client) GetServiceAttributes(ctx context.Context, params *GetServiceAttributesInput, optFns ...func(*Options)) (*GetServiceAttributesOutput, error) {
+ if params == nil {
+ params = &GetServiceAttributesInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "GetServiceAttributes", params, optFns, c.addOperationGetServiceAttributesMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*GetServiceAttributesOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type GetServiceAttributesInput struct {
+
+ // The ID of the service that you want to get attributes for.
+ //
+ // This member is required.
+ ServiceId *string
+
+ noSmithyDocumentSerde
+}
+
+type GetServiceAttributesOutput struct {
+
+ // A complex type that contains the service ARN and a list of attribute key-value
+ // pairs associated with the service.
+ ServiceAttributes *types.ServiceAttributes
+
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetServiceAttributesMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpGetServiceAttributes{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpGetServiceAttributes{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "GetServiceAttributes"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpGetServiceAttributesValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetServiceAttributes(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opGetServiceAttributes(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "GetServiceAttributes",
+ }
+}
diff --git a/service/servicediscovery/api_op_UpdateService.go b/service/servicediscovery/api_op_UpdateService.go
index ef35b280efa..c439cfe59c4 100644
--- a/service/servicediscovery/api_op_UpdateService.go
+++ b/service/servicediscovery/api_op_UpdateService.go
@@ -52,7 +52,8 @@ type UpdateServiceInput struct {
// This member is required.
Id *string
- // A complex type that contains the new settings for the service.
+ // A complex type that contains the new settings for the service. You can specify
+ // a maximum of 30 attributes (key-value pairs).
//
// This member is required.
Service *types.ServiceChange
diff --git a/service/servicediscovery/api_op_UpdateServiceAttributes.go b/service/servicediscovery/api_op_UpdateServiceAttributes.go
new file mode 100644
index 00000000000..20603c622ab
--- /dev/null
+++ b/service/servicediscovery/api_op_UpdateServiceAttributes.go
@@ -0,0 +1,157 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package servicediscovery
+
+import (
+ "context"
+ "fmt"
+ awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+ "github.com/aws/smithy-go/middleware"
+ smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Submits a request to update a specified service to add service-level attributes.
+func (c *Client) UpdateServiceAttributes(ctx context.Context, params *UpdateServiceAttributesInput, optFns ...func(*Options)) (*UpdateServiceAttributesOutput, error) {
+ if params == nil {
+ params = &UpdateServiceAttributesInput{}
+ }
+
+ result, metadata, err := c.invokeOperation(ctx, "UpdateServiceAttributes", params, optFns, c.addOperationUpdateServiceAttributesMiddlewares)
+ if err != nil {
+ return nil, err
+ }
+
+ out := result.(*UpdateServiceAttributesOutput)
+ out.ResultMetadata = metadata
+ return out, nil
+}
+
+type UpdateServiceAttributesInput struct {
+
+ // A string map that contains attribute key-value pairs.
+ //
+ // This member is required.
+ Attributes map[string]string
+
+ // The ID of the service that you want to update.
+ //
+ // This member is required.
+ ServiceId *string
+
+ noSmithyDocumentSerde
+}
+
+type UpdateServiceAttributesOutput struct {
+ // Metadata pertaining to the operation's result.
+ ResultMetadata middleware.Metadata
+
+ noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationUpdateServiceAttributesMiddlewares(stack *middleware.Stack, options Options) (err error) {
+ if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+ return err
+ }
+ err = stack.Serialize.Add(&awsAwsjson11_serializeOpUpdateServiceAttributes{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ err = stack.Deserialize.Add(&awsAwsjson11_deserializeOpUpdateServiceAttributes{}, middleware.After)
+ if err != nil {
+ return err
+ }
+ if err := addProtocolFinalizerMiddlewares(stack, options, "UpdateServiceAttributes"); err != nil {
+ return fmt.Errorf("add protocol finalizers: %v", err)
+ }
+
+ if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+ return err
+ }
+ if err = addSetLoggerMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addClientRequestID(stack); err != nil {
+ return err
+ }
+ if err = addComputeContentLength(stack); err != nil {
+ return err
+ }
+ if err = addResolveEndpointMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addComputePayloadSHA256(stack); err != nil {
+ return err
+ }
+ if err = addRetry(stack, options); err != nil {
+ return err
+ }
+ if err = addRawResponseToMetadata(stack); err != nil {
+ return err
+ }
+ if err = addRecordResponseTiming(stack); err != nil {
+ return err
+ }
+ if err = addSpanRetryLoop(stack, options); err != nil {
+ return err
+ }
+ if err = addClientUserAgent(stack, options); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addTimeOffsetBuild(stack, c); err != nil {
+ return err
+ }
+ if err = addUserAgentRetryMode(stack, options); err != nil {
+ return err
+ }
+ if err = addOpUpdateServiceAttributesValidationMiddleware(stack); err != nil {
+ return err
+ }
+ if err = stack.Initialize.Add(newServiceMetadataMiddleware_opUpdateServiceAttributes(options.Region), middleware.Before); err != nil {
+ return err
+ }
+ if err = addRecursionDetection(stack); err != nil {
+ return err
+ }
+ if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addResponseErrorMiddleware(stack); err != nil {
+ return err
+ }
+ if err = addRequestResponseLogging(stack, options); err != nil {
+ return err
+ }
+ if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+ return err
+ }
+ if err = addSpanInitializeStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanInitializeEnd(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestStart(stack); err != nil {
+ return err
+ }
+ if err = addSpanBuildRequestEnd(stack); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newServiceMetadataMiddleware_opUpdateServiceAttributes(region string) *awsmiddleware.RegisterServiceMetadata {
+ return &awsmiddleware.RegisterServiceMetadata{
+ Region: region,
+ ServiceID: ServiceID,
+ OperationName: "UpdateServiceAttributes",
+ }
+}
diff --git a/service/servicediscovery/deserializers.go b/service/servicediscovery/deserializers.go
index 4a010e86caa..aec053e2694 100644
--- a/service/servicediscovery/deserializers.go
+++ b/service/servicediscovery/deserializers.go
@@ -759,6 +759,120 @@ func awsAwsjson11_deserializeOpErrorDeleteService(response *smithyhttp.Response,
}
}
+type awsAwsjson11_deserializeOpDeleteServiceAttributes struct {
+}
+
+func (*awsAwsjson11_deserializeOpDeleteServiceAttributes) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpDeleteServiceAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorDeleteServiceAttributes(response, &metadata)
+ }
+ output := &DeleteServiceAttributesOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentDeleteServiceAttributesOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorDeleteServiceAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InvalidInput", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidInput(response, errorBody)
+
+ case strings.EqualFold("ServiceNotFound", errorCode):
+ return awsAwsjson11_deserializeErrorServiceNotFound(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
type awsAwsjson11_deserializeOpDeregisterInstance struct {
}
@@ -1698,6 +1812,120 @@ func awsAwsjson11_deserializeOpErrorGetService(response *smithyhttp.Response, me
}
}
+type awsAwsjson11_deserializeOpGetServiceAttributes struct {
+}
+
+func (*awsAwsjson11_deserializeOpGetServiceAttributes) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpGetServiceAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorGetServiceAttributes(response, &metadata)
+ }
+ output := &GetServiceAttributesOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentGetServiceAttributesOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorGetServiceAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("InvalidInput", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidInput(response, errorBody)
+
+ case strings.EqualFold("ServiceNotFound", errorCode):
+ return awsAwsjson11_deserializeErrorServiceNotFound(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
type awsAwsjson11_deserializeOpListInstances struct {
}
@@ -2976,9 +3204,129 @@ func (m *awsAwsjson11_deserializeOpUpdatePublicDnsNamespace) HandleDeserialize(c
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsAwsjson11_deserializeOpErrorUpdatePublicDnsNamespace(response, &metadata)
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdatePublicDnsNamespace(response, &metadata)
+ }
+ output := &UpdatePublicDnsNamespaceOutput{}
+ out.Result = output
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(response.Body, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ err = awsAwsjson11_deserializeOpDocumentUpdatePublicDnsNamespaceOutput(&output, shape)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return out, metadata, err
+ }
+
+ return out, metadata, err
+}
+
+func awsAwsjson11_deserializeOpErrorUpdatePublicDnsNamespace(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+ var errorBuffer bytes.Buffer
+ if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+ return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+ }
+ errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+ errorCode := "UnknownError"
+ errorMessage := errorCode
+
+ headerCode := response.Header.Get("X-Amzn-ErrorType")
+
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ bodyInfo, err := getProtocolErrorInfo(decoder)
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ if typ, ok := resolveProtocolErrorType(headerCode, bodyInfo); ok {
+ errorCode = restjson.SanitizeErrorCode(typ)
+ }
+ if len(bodyInfo.Message) != 0 {
+ errorMessage = bodyInfo.Message
+ }
+ switch {
+ case strings.EqualFold("DuplicateRequest", errorCode):
+ return awsAwsjson11_deserializeErrorDuplicateRequest(response, errorBody)
+
+ case strings.EqualFold("InvalidInput", errorCode):
+ return awsAwsjson11_deserializeErrorInvalidInput(response, errorBody)
+
+ case strings.EqualFold("NamespaceNotFound", errorCode):
+ return awsAwsjson11_deserializeErrorNamespaceNotFound(response, errorBody)
+
+ case strings.EqualFold("ResourceInUse", errorCode):
+ return awsAwsjson11_deserializeErrorResourceInUse(response, errorBody)
+
+ default:
+ genericError := &smithy.GenericAPIError{
+ Code: errorCode,
+ Message: errorMessage,
+ }
+ return genericError
+
+ }
+}
+
+type awsAwsjson11_deserializeOpUpdateService struct {
+}
+
+func (*awsAwsjson11_deserializeOpUpdateService) ID() string {
+ return "OperationDeserializer"
+}
+
+func (m *awsAwsjson11_deserializeOpUpdateService) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+ out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+ out, metadata, err = next.HandleDeserialize(ctx, in)
+ if err != nil {
+ return out, metadata, err
+ }
+
+ _, span := tracing.StartSpan(ctx, "OperationDeserializer")
+ endTimer := startMetricTimer(ctx, "client.call.deserialization_duration")
+ defer endTimer()
+ defer span.End()
+ response, ok := out.RawResponse.(*smithyhttp.Response)
+ if !ok {
+ return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+ }
+
+ if response.StatusCode < 200 || response.StatusCode >= 300 {
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateService(response, &metadata)
}
- output := &UpdatePublicDnsNamespaceOutput{}
+ output := &UpdateServiceOutput{}
out.Result = output
var buff [1024]byte
@@ -2998,7 +3346,7 @@ func (m *awsAwsjson11_deserializeOpUpdatePublicDnsNamespace) HandleDeserialize(c
return out, metadata, err
}
- err = awsAwsjson11_deserializeOpDocumentUpdatePublicDnsNamespaceOutput(&output, shape)
+ err = awsAwsjson11_deserializeOpDocumentUpdateServiceOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -3012,7 +3360,7 @@ func (m *awsAwsjson11_deserializeOpUpdatePublicDnsNamespace) HandleDeserialize(c
return out, metadata, err
}
-func awsAwsjson11_deserializeOpErrorUpdatePublicDnsNamespace(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsAwsjson11_deserializeOpErrorUpdateService(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -3055,11 +3403,8 @@ func awsAwsjson11_deserializeOpErrorUpdatePublicDnsNamespace(response *smithyhtt
case strings.EqualFold("InvalidInput", errorCode):
return awsAwsjson11_deserializeErrorInvalidInput(response, errorBody)
- case strings.EqualFold("NamespaceNotFound", errorCode):
- return awsAwsjson11_deserializeErrorNamespaceNotFound(response, errorBody)
-
- case strings.EqualFold("ResourceInUse", errorCode):
- return awsAwsjson11_deserializeErrorResourceInUse(response, errorBody)
+ case strings.EqualFold("ServiceNotFound", errorCode):
+ return awsAwsjson11_deserializeErrorServiceNotFound(response, errorBody)
default:
genericError := &smithy.GenericAPIError{
@@ -3071,14 +3416,14 @@ func awsAwsjson11_deserializeOpErrorUpdatePublicDnsNamespace(response *smithyhtt
}
}
-type awsAwsjson11_deserializeOpUpdateService struct {
+type awsAwsjson11_deserializeOpUpdateServiceAttributes struct {
}
-func (*awsAwsjson11_deserializeOpUpdateService) ID() string {
+func (*awsAwsjson11_deserializeOpUpdateServiceAttributes) ID() string {
return "OperationDeserializer"
}
-func (m *awsAwsjson11_deserializeOpUpdateService) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+func (m *awsAwsjson11_deserializeOpUpdateServiceAttributes) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
@@ -3096,9 +3441,9 @@ func (m *awsAwsjson11_deserializeOpUpdateService) HandleDeserialize(ctx context.
}
if response.StatusCode < 200 || response.StatusCode >= 300 {
- return out, metadata, awsAwsjson11_deserializeOpErrorUpdateService(response, &metadata)
+ return out, metadata, awsAwsjson11_deserializeOpErrorUpdateServiceAttributes(response, &metadata)
}
- output := &UpdateServiceOutput{}
+ output := &UpdateServiceAttributesOutput{}
out.Result = output
var buff [1024]byte
@@ -3118,7 +3463,7 @@ func (m *awsAwsjson11_deserializeOpUpdateService) HandleDeserialize(ctx context.
return out, metadata, err
}
- err = awsAwsjson11_deserializeOpDocumentUpdateServiceOutput(&output, shape)
+ err = awsAwsjson11_deserializeOpDocumentUpdateServiceAttributesOutput(&output, shape)
if err != nil {
var snapshot bytes.Buffer
io.Copy(&snapshot, ringBuffer)
@@ -3132,7 +3477,7 @@ func (m *awsAwsjson11_deserializeOpUpdateService) HandleDeserialize(ctx context.
return out, metadata, err
}
-func awsAwsjson11_deserializeOpErrorUpdateService(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+func awsAwsjson11_deserializeOpErrorUpdateServiceAttributes(response *smithyhttp.Response, metadata *middleware.Metadata) error {
var errorBuffer bytes.Buffer
if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
@@ -3169,12 +3514,12 @@ func awsAwsjson11_deserializeOpErrorUpdateService(response *smithyhttp.Response,
errorMessage = bodyInfo.Message
}
switch {
- case strings.EqualFold("DuplicateRequest", errorCode):
- return awsAwsjson11_deserializeErrorDuplicateRequest(response, errorBody)
-
case strings.EqualFold("InvalidInput", errorCode):
return awsAwsjson11_deserializeErrorInvalidInput(response, errorBody)
+ case strings.EqualFold("ServiceAttributesLimitExceededException", errorCode):
+ return awsAwsjson11_deserializeErrorServiceAttributesLimitExceededException(response, errorBody)
+
case strings.EqualFold("ServiceNotFound", errorCode):
return awsAwsjson11_deserializeErrorServiceNotFound(response, errorBody)
@@ -3608,6 +3953,41 @@ func awsAwsjson11_deserializeErrorServiceAlreadyExists(response *smithyhttp.Resp
return output
}
+func awsAwsjson11_deserializeErrorServiceAttributesLimitExceededException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+ var buff [1024]byte
+ ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+ body := io.TeeReader(errorBody, ringBuffer)
+ decoder := json.NewDecoder(body)
+ decoder.UseNumber()
+ var shape interface{}
+ if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ output := &types.ServiceAttributesLimitExceededException{}
+ err := awsAwsjson11_deserializeDocumentServiceAttributesLimitExceededException(&output, shape)
+
+ if err != nil {
+ var snapshot bytes.Buffer
+ io.Copy(&snapshot, ringBuffer)
+ err = &smithy.DeserializationError{
+ Err: fmt.Errorf("failed to decode response body, %w", err),
+ Snapshot: snapshot.Bytes(),
+ }
+ return err
+ }
+
+ errorBody.Seek(0, io.SeekStart)
+ return output
+}
+
func awsAwsjson11_deserializeErrorServiceNotFound(response *smithyhttp.Response, errorBody *bytes.Reader) error {
var buff [1024]byte
ringBuffer := smithyio.NewRingBuffer(buff[:])
@@ -5520,6 +5900,127 @@ func awsAwsjson11_deserializeDocumentServiceAlreadyExists(v **types.ServiceAlrea
return nil
}
+func awsAwsjson11_deserializeDocumentServiceAttributes(v **types.ServiceAttributes, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ServiceAttributes
+ if *v == nil {
+ sv = &types.ServiceAttributes{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "Attributes":
+ if err := awsAwsjson11_deserializeDocumentServiceAttributesMap(&sv.Attributes, value); err != nil {
+ return err
+ }
+
+ case "ServiceArn":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected Arn to be of type string, got %T instead", value)
+ }
+ sv.ServiceArn = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeDocumentServiceAttributesLimitExceededException(v **types.ServiceAttributesLimitExceededException, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *types.ServiceAttributesLimitExceededException
+ if *v == nil {
+ sv = &types.ServiceAttributesLimitExceededException{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "message", "Message":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ErrorMessage to be of type string, got %T instead", value)
+ }
+ sv.Message = ptr.String(jtv)
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
+func awsAwsjson11_deserializeDocumentServiceAttributesMap(v *map[string]string, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var mv map[string]string
+ if *v == nil {
+ mv = map[string]string{}
+ } else {
+ mv = *v
+ }
+
+ for key, value := range shape {
+ var parsedVal string
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected ServiceAttributeValue to be of type string, got %T instead", value)
+ }
+ parsedVal = jtv
+ }
+ mv[key] = parsedVal
+
+ }
+ *v = mv
+ return nil
+}
+
func awsAwsjson11_deserializeDocumentServiceNotFound(v **types.ServiceNotFound, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -6086,6 +6587,37 @@ func awsAwsjson11_deserializeOpDocumentDeleteNamespaceOutput(v **DeleteNamespace
return nil
}
+func awsAwsjson11_deserializeOpDocumentDeleteServiceAttributesOutput(v **DeleteServiceAttributesOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *DeleteServiceAttributesOutput
+ if *v == nil {
+ sv = &DeleteServiceAttributesOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeOpDocumentDeleteServiceOutput(v **DeleteServiceOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -6403,6 +6935,42 @@ func awsAwsjson11_deserializeOpDocumentGetOperationOutput(v **GetOperationOutput
return nil
}
+func awsAwsjson11_deserializeOpDocumentGetServiceAttributesOutput(v **GetServiceAttributesOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *GetServiceAttributesOutput
+ if *v == nil {
+ sv = &GetServiceAttributesOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ case "ServiceAttributes":
+ if err := awsAwsjson11_deserializeDocumentServiceAttributes(&sv.ServiceAttributes, value); err != nil {
+ return err
+ }
+
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeOpDocumentGetServiceOutput(v **GetServiceOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
@@ -6877,6 +7445,37 @@ func awsAwsjson11_deserializeOpDocumentUpdatePublicDnsNamespaceOutput(v **Update
return nil
}
+func awsAwsjson11_deserializeOpDocumentUpdateServiceAttributesOutput(v **UpdateServiceAttributesOutput, value interface{}) error {
+ if v == nil {
+ return fmt.Errorf("unexpected nil of type %T", v)
+ }
+ if value == nil {
+ return nil
+ }
+
+ shape, ok := value.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("unexpected JSON type %v", value)
+ }
+
+ var sv *UpdateServiceAttributesOutput
+ if *v == nil {
+ sv = &UpdateServiceAttributesOutput{}
+ } else {
+ sv = *v
+ }
+
+ for key, value := range shape {
+ switch key {
+ default:
+ _, _ = key, value
+
+ }
+ }
+ *v = sv
+ return nil
+}
+
func awsAwsjson11_deserializeOpDocumentUpdateServiceOutput(v **UpdateServiceOutput, value interface{}) error {
if v == nil {
return fmt.Errorf("unexpected nil of type %T", v)
diff --git a/service/servicediscovery/generated.json b/service/servicediscovery/generated.json
index c9892de7d45..2c74b69476a 100644
--- a/service/servicediscovery/generated.json
+++ b/service/servicediscovery/generated.json
@@ -14,6 +14,7 @@
"api_op_CreateService.go",
"api_op_DeleteNamespace.go",
"api_op_DeleteService.go",
+ "api_op_DeleteServiceAttributes.go",
"api_op_DeregisterInstance.go",
"api_op_DiscoverInstances.go",
"api_op_DiscoverInstancesRevision.go",
@@ -22,6 +23,7 @@
"api_op_GetNamespace.go",
"api_op_GetOperation.go",
"api_op_GetService.go",
+ "api_op_GetServiceAttributes.go",
"api_op_ListInstances.go",
"api_op_ListNamespaces.go",
"api_op_ListOperations.go",
@@ -35,6 +37,7 @@
"api_op_UpdatePrivateDnsNamespace.go",
"api_op_UpdatePublicDnsNamespace.go",
"api_op_UpdateService.go",
+ "api_op_UpdateServiceAttributes.go",
"auth.go",
"deserializers.go",
"doc.go",
diff --git a/service/servicediscovery/go_module_metadata.go b/service/servicediscovery/go_module_metadata.go
index 125e383a275..2e66a5784bd 100644
--- a/service/servicediscovery/go_module_metadata.go
+++ b/service/servicediscovery/go_module_metadata.go
@@ -3,4 +3,4 @@
package servicediscovery
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.33.7"
+const goModuleVersion = "1.34.0"
diff --git a/service/servicediscovery/serializers.go b/service/servicediscovery/serializers.go
index b4b14b97e14..19a92bb5b8f 100644
--- a/service/servicediscovery/serializers.go
+++ b/service/servicediscovery/serializers.go
@@ -382,6 +382,67 @@ func (m *awsAwsjson11_serializeOpDeleteService) HandleSerialize(ctx context.Cont
return next.HandleSerialize(ctx, in)
}
+type awsAwsjson11_serializeOpDeleteServiceAttributes struct {
+}
+
+func (*awsAwsjson11_serializeOpDeleteServiceAttributes) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpDeleteServiceAttributes) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*DeleteServiceAttributesInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("Route53AutoNaming_v20170314.DeleteServiceAttributes")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentDeleteServiceAttributesInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
type awsAwsjson11_serializeOpDeregisterInstance struct {
}
@@ -870,6 +931,67 @@ func (m *awsAwsjson11_serializeOpGetService) HandleSerialize(ctx context.Context
return next.HandleSerialize(ctx, in)
}
+type awsAwsjson11_serializeOpGetServiceAttributes struct {
+}
+
+func (*awsAwsjson11_serializeOpGetServiceAttributes) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpGetServiceAttributes) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*GetServiceAttributesInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("Route53AutoNaming_v20170314.GetServiceAttributes")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentGetServiceAttributesInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
+
type awsAwsjson11_serializeOpListInstances struct {
}
@@ -1662,6 +1784,67 @@ func (m *awsAwsjson11_serializeOpUpdateService) HandleSerialize(ctx context.Cont
span.End()
return next.HandleSerialize(ctx, in)
}
+
+type awsAwsjson11_serializeOpUpdateServiceAttributes struct {
+}
+
+func (*awsAwsjson11_serializeOpUpdateServiceAttributes) ID() string {
+ return "OperationSerializer"
+}
+
+func (m *awsAwsjson11_serializeOpUpdateServiceAttributes) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+ out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+ _, span := tracing.StartSpan(ctx, "OperationSerializer")
+ endTimer := startMetricTimer(ctx, "client.call.serialization_duration")
+ defer endTimer()
+ defer span.End()
+ request, ok := in.Request.(*smithyhttp.Request)
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+ }
+
+ input, ok := in.Parameters.(*UpdateServiceAttributesInput)
+ _ = input
+ if !ok {
+ return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+ }
+
+ operationPath := "/"
+ if len(request.Request.URL.Path) == 0 {
+ request.Request.URL.Path = operationPath
+ } else {
+ request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+ if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+ request.Request.URL.Path += "/"
+ }
+ }
+ request.Request.Method = "POST"
+ httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+ if err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ httpBindingEncoder.SetHeader("Content-Type").String("application/x-amz-json-1.1")
+ httpBindingEncoder.SetHeader("X-Amz-Target").String("Route53AutoNaming_v20170314.UpdateServiceAttributes")
+
+ jsonEncoder := smithyjson.NewEncoder()
+ if err := awsAwsjson11_serializeOpDocumentUpdateServiceAttributesInput(input, jsonEncoder.Value); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+
+ if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+ return out, metadata, &smithy.SerializationError{Err: err}
+ }
+ in.Request = request
+
+ endTimer()
+ span.End()
+ return next.HandleSerialize(ctx, in)
+}
func awsAwsjson11_serializeDocumentAttributes(v map[string]string, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -2033,6 +2216,28 @@ func awsAwsjson11_serializeDocumentPublicDnsPropertiesMutableChange(v *types.Pub
return nil
}
+func awsAwsjson11_serializeDocumentServiceAttributeKeyList(v []string, value smithyjson.Value) error {
+ array := value.Array()
+ defer array.Close()
+
+ for i := range v {
+ av := array.Value()
+ av.String(v[i])
+ }
+ return nil
+}
+
+func awsAwsjson11_serializeDocumentServiceAttributesMap(v map[string]string, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ for key := range v {
+ om := object.Key(key)
+ om.String(v[key])
+ }
+ return nil
+}
+
func awsAwsjson11_serializeDocumentServiceChange(v *types.ServiceChange, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -2339,6 +2544,25 @@ func awsAwsjson11_serializeOpDocumentDeleteNamespaceInput(v *DeleteNamespaceInpu
return nil
}
+func awsAwsjson11_serializeOpDocumentDeleteServiceAttributesInput(v *DeleteServiceAttributesInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Attributes != nil {
+ ok := object.Key("Attributes")
+ if err := awsAwsjson11_serializeDocumentServiceAttributeKeyList(v.Attributes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.ServiceId != nil {
+ ok := object.Key("ServiceId")
+ ok.String(*v.ServiceId)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeOpDocumentDeleteServiceInput(v *DeleteServiceInput, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -2496,6 +2720,18 @@ func awsAwsjson11_serializeOpDocumentGetOperationInput(v *GetOperationInput, val
return nil
}
+func awsAwsjson11_serializeOpDocumentGetServiceAttributesInput(v *GetServiceAttributesInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.ServiceId != nil {
+ ok := object.Key("ServiceId")
+ ok.String(*v.ServiceId)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeOpDocumentGetServiceInput(v *GetServiceInput, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
@@ -2775,6 +3011,25 @@ func awsAwsjson11_serializeOpDocumentUpdatePublicDnsNamespaceInput(v *UpdatePubl
return nil
}
+func awsAwsjson11_serializeOpDocumentUpdateServiceAttributesInput(v *UpdateServiceAttributesInput, value smithyjson.Value) error {
+ object := value.Object()
+ defer object.Close()
+
+ if v.Attributes != nil {
+ ok := object.Key("Attributes")
+ if err := awsAwsjson11_serializeDocumentServiceAttributesMap(v.Attributes, ok); err != nil {
+ return err
+ }
+ }
+
+ if v.ServiceId != nil {
+ ok := object.Key("ServiceId")
+ ok.String(*v.ServiceId)
+ }
+
+ return nil
+}
+
func awsAwsjson11_serializeOpDocumentUpdateServiceInput(v *UpdateServiceInput, value smithyjson.Value) error {
object := value.Object()
defer object.Close()
diff --git a/service/servicediscovery/snapshot/api_op_DeleteServiceAttributes.go.snap b/service/servicediscovery/snapshot/api_op_DeleteServiceAttributes.go.snap
new file mode 100644
index 00000000000..73659e866e5
--- /dev/null
+++ b/service/servicediscovery/snapshot/api_op_DeleteServiceAttributes.go.snap
@@ -0,0 +1,41 @@
+DeleteServiceAttributes
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/servicediscovery/snapshot/api_op_GetServiceAttributes.go.snap b/service/servicediscovery/snapshot/api_op_GetServiceAttributes.go.snap
new file mode 100644
index 00000000000..0d21857c836
--- /dev/null
+++ b/service/servicediscovery/snapshot/api_op_GetServiceAttributes.go.snap
@@ -0,0 +1,41 @@
+GetServiceAttributes
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/servicediscovery/snapshot/api_op_UpdateServiceAttributes.go.snap b/service/servicediscovery/snapshot/api_op_UpdateServiceAttributes.go.snap
new file mode 100644
index 00000000000..8777d2f5cab
--- /dev/null
+++ b/service/servicediscovery/snapshot/api_op_UpdateServiceAttributes.go.snap
@@ -0,0 +1,41 @@
+UpdateServiceAttributes
+ Initialize stack step
+ spanInitializeStart
+ RegisterServiceMetadata
+ legacyEndpointContextSetter
+ SetLogger
+ OperationInputValidation
+ spanInitializeEnd
+ Serialize stack step
+ spanBuildRequestStart
+ setOperationInput
+ ResolveEndpoint
+ OperationSerializer
+ Build stack step
+ ClientRequestID
+ ComputeContentLength
+ UserAgent
+ AddTimeOffsetMiddleware
+ RecursionDetection
+ spanBuildRequestEnd
+ Finalize stack step
+ ResolveAuthScheme
+ GetIdentity
+ ResolveEndpointV2
+ disableHTTPS
+ ComputePayloadHash
+ spanRetryLoop
+ Retry
+ RetryMetricsHeader
+ setLegacyContextSigningOptions
+ Signing
+ Deserialize stack step
+ AddRawResponseToMetadata
+ ErrorCloseResponseBody
+ CloseResponseBody
+ ResponseErrorWrapper
+ RequestIDRetriever
+ OperationDeserializer
+ AddTimeOffsetMiddleware
+ RecordResponseTiming
+ RequestResponseLogger
diff --git a/service/servicediscovery/snapshot_test.go b/service/servicediscovery/snapshot_test.go
index 155a3a30e87..252b8145ddb 100644
--- a/service/servicediscovery/snapshot_test.go
+++ b/service/servicediscovery/snapshot_test.go
@@ -134,6 +134,18 @@ func TestCheckSnapshot_DeleteService(t *testing.T) {
}
}
+func TestCheckSnapshot_DeleteServiceAttributes(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.DeleteServiceAttributes(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "DeleteServiceAttributes")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_DeregisterInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.DeregisterInstance(context.Background(), nil, func(o *Options) {
@@ -230,6 +242,18 @@ func TestCheckSnapshot_GetService(t *testing.T) {
}
}
+func TestCheckSnapshot_GetServiceAttributes(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetServiceAttributes(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "GetServiceAttributes")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestCheckSnapshot_ListInstances(t *testing.T) {
svc := New(Options{})
_, err := svc.ListInstances(context.Background(), nil, func(o *Options) {
@@ -385,6 +409,18 @@ func TestCheckSnapshot_UpdateService(t *testing.T) {
t.Fatal(err)
}
}
+
+func TestCheckSnapshot_UpdateServiceAttributes(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateServiceAttributes(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return testSnapshot(stack, "UpdateServiceAttributes")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
func TestUpdateSnapshot_CreateHttpNamespace(t *testing.T) {
svc := New(Options{})
_, err := svc.CreateHttpNamespace(context.Background(), nil, func(o *Options) {
@@ -457,6 +493,18 @@ func TestUpdateSnapshot_DeleteService(t *testing.T) {
}
}
+func TestUpdateSnapshot_DeleteServiceAttributes(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.DeleteServiceAttributes(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "DeleteServiceAttributes")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_DeregisterInstance(t *testing.T) {
svc := New(Options{})
_, err := svc.DeregisterInstance(context.Background(), nil, func(o *Options) {
@@ -553,6 +601,18 @@ func TestUpdateSnapshot_GetService(t *testing.T) {
}
}
+func TestUpdateSnapshot_GetServiceAttributes(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.GetServiceAttributes(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "GetServiceAttributes")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
+
func TestUpdateSnapshot_ListInstances(t *testing.T) {
svc := New(Options{})
_, err := svc.ListInstances(context.Background(), nil, func(o *Options) {
@@ -708,3 +768,15 @@ func TestUpdateSnapshot_UpdateService(t *testing.T) {
t.Fatal(err)
}
}
+
+func TestUpdateSnapshot_UpdateServiceAttributes(t *testing.T) {
+ svc := New(Options{})
+ _, err := svc.UpdateServiceAttributes(context.Background(), nil, func(o *Options) {
+ o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+ return updateSnapshot(stack, "UpdateServiceAttributes")
+ })
+ })
+ if _, ok := err.(snapshotOK); !ok && err != nil {
+ t.Fatal(err)
+ }
+}
diff --git a/service/servicediscovery/types/errors.go b/service/servicediscovery/types/errors.go
index faeb64dd981..cb9d5ac3bea 100644
--- a/service/servicediscovery/types/errors.go
+++ b/service/servicediscovery/types/errors.go
@@ -337,6 +337,35 @@ func (e *ServiceAlreadyExists) ErrorCode() string {
}
func (e *ServiceAlreadyExists) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+// The attribute can't be added to the service because you've exceeded the quota
+// for the number of attributes you can add to a service.
+type ServiceAttributesLimitExceededException struct {
+ Message *string
+
+ ErrorCodeOverride *string
+
+ noSmithyDocumentSerde
+}
+
+func (e *ServiceAttributesLimitExceededException) Error() string {
+ return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ServiceAttributesLimitExceededException) ErrorMessage() string {
+ if e.Message == nil {
+ return ""
+ }
+ return *e.Message
+}
+func (e *ServiceAttributesLimitExceededException) ErrorCode() string {
+ if e == nil || e.ErrorCodeOverride == nil {
+ return "ServiceAttributesLimitExceededException"
+ }
+ return *e.ErrorCodeOverride
+}
+func (e *ServiceAttributesLimitExceededException) ErrorFault() smithy.ErrorFault {
+ return smithy.FaultClient
+}
+
// No service exists with the specified ID.
type ServiceNotFound struct {
Message *string
diff --git a/service/servicediscovery/types/types.go b/service/servicediscovery/types/types.go
index f0921d6ceee..9a4b5375787 100644
--- a/service/servicediscovery/types/types.go
+++ b/service/servicediscovery/types/types.go
@@ -9,14 +9,15 @@ import (
// A complex type that contains information about the Amazon Route 53 DNS records
// that you want Cloud Map to create when you register an instance.
-//
-// The record types of a service can only be changed by deleting the service and
-// recreating it with a new Dnsconfig .
type DnsConfig struct {
// An array that contains one DnsRecord object for each Route 53 DNS record that
// you want Cloud Map to create when you register an instance.
//
+ // The record type of a service specified in a DnsRecord object can't be updated.
+ // To change a record type, you need to delete the service and recreate it with a
+ // new DnsConfig .
+ //
// This member is required.
DnsRecords []DnsRecord
@@ -1057,6 +1058,26 @@ type Service struct {
noSmithyDocumentSerde
}
+// A complex type that contains information about attributes associated with a
+// specific service.
+type ServiceAttributes struct {
+
+ // A string map that contains the following information for the service that you
+ // specify in ServiceArn :
+ //
+ // - The attributes that apply to the service.
+ //
+ // - For each attribute, the applicable value.
+ //
+ // You can specify a total of 30 attributes.
+ Attributes map[string]string
+
+ // The ARN of the service that the attributes are associated with.
+ ServiceArn *string
+
+ noSmithyDocumentSerde
+}
+
// A complex type that contains changes to an existing service.
type ServiceChange struct {
diff --git a/service/servicediscovery/validators.go b/service/servicediscovery/validators.go
index 6b774a073d9..58a440ee103 100644
--- a/service/servicediscovery/validators.go
+++ b/service/servicediscovery/validators.go
@@ -110,6 +110,26 @@ func (m *validateOpDeleteNamespace) HandleInitialize(ctx context.Context, in mid
return next.HandleInitialize(ctx, in)
}
+type validateOpDeleteServiceAttributes struct {
+}
+
+func (*validateOpDeleteServiceAttributes) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpDeleteServiceAttributes) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*DeleteServiceAttributesInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpDeleteServiceAttributesInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpDeleteService struct {
}
@@ -270,6 +290,26 @@ func (m *validateOpGetOperation) HandleInitialize(ctx context.Context, in middle
return next.HandleInitialize(ctx, in)
}
+type validateOpGetServiceAttributes struct {
+}
+
+func (*validateOpGetServiceAttributes) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpGetServiceAttributes) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*GetServiceAttributesInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpGetServiceAttributesInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpGetService struct {
}
@@ -530,6 +570,26 @@ func (m *validateOpUpdatePublicDnsNamespace) HandleInitialize(ctx context.Contex
return next.HandleInitialize(ctx, in)
}
+type validateOpUpdateServiceAttributes struct {
+}
+
+func (*validateOpUpdateServiceAttributes) ID() string {
+ return "OperationInputValidation"
+}
+
+func (m *validateOpUpdateServiceAttributes) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+ out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+ input, ok := in.Parameters.(*UpdateServiceAttributesInput)
+ if !ok {
+ return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+ }
+ if err := validateOpUpdateServiceAttributesInput(input); err != nil {
+ return out, metadata, err
+ }
+ return next.HandleInitialize(ctx, in)
+}
+
type validateOpUpdateService struct {
}
@@ -570,6 +630,10 @@ func addOpDeleteNamespaceValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpDeleteNamespace{}, middleware.After)
}
+func addOpDeleteServiceAttributesValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpDeleteServiceAttributes{}, middleware.After)
+}
+
func addOpDeleteServiceValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpDeleteService{}, middleware.After)
}
@@ -602,6 +666,10 @@ func addOpGetOperationValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetOperation{}, middleware.After)
}
+func addOpGetServiceAttributesValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpGetServiceAttributes{}, middleware.After)
+}
+
func addOpGetServiceValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpGetService{}, middleware.After)
}
@@ -654,6 +722,10 @@ func addOpUpdatePublicDnsNamespaceValidationMiddleware(stack *middleware.Stack)
return stack.Initialize.Add(&validateOpUpdatePublicDnsNamespace{}, middleware.After)
}
+func addOpUpdateServiceAttributesValidationMiddleware(stack *middleware.Stack) error {
+ return stack.Initialize.Add(&validateOpUpdateServiceAttributes{}, middleware.After)
+}
+
func addOpUpdateServiceValidationMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(&validateOpUpdateService{}, middleware.After)
}
@@ -1257,6 +1329,24 @@ func validateOpDeleteNamespaceInput(v *DeleteNamespaceInput) error {
}
}
+func validateOpDeleteServiceAttributesInput(v *DeleteServiceAttributesInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "DeleteServiceAttributesInput"}
+ if v.ServiceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ServiceId"))
+ }
+ if v.Attributes == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Attributes"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpDeleteServiceInput(v *DeleteServiceInput) error {
if v == nil {
return nil
@@ -1389,6 +1479,21 @@ func validateOpGetOperationInput(v *GetOperationInput) error {
}
}
+func validateOpGetServiceAttributesInput(v *GetServiceAttributesInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "GetServiceAttributesInput"}
+ if v.ServiceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ServiceId"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpGetServiceInput(v *GetServiceInput) error {
if v == nil {
return nil
@@ -1633,6 +1738,24 @@ func validateOpUpdatePublicDnsNamespaceInput(v *UpdatePublicDnsNamespaceInput) e
}
}
+func validateOpUpdateServiceAttributesInput(v *UpdateServiceAttributesInput) error {
+ if v == nil {
+ return nil
+ }
+ invalidParams := smithy.InvalidParamsError{Context: "UpdateServiceAttributesInput"}
+ if v.ServiceId == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("ServiceId"))
+ }
+ if v.Attributes == nil {
+ invalidParams.Add(smithy.NewErrParamRequired("Attributes"))
+ }
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ } else {
+ return nil
+ }
+}
+
func validateOpUpdateServiceInput(v *UpdateServiceInput) error {
if v == nil {
return nil
diff --git a/service/synthetics/CHANGELOG.md b/service/synthetics/CHANGELOG.md
index da8279b15b6..1978336b417 100644
--- a/service/synthetics/CHANGELOG.md
+++ b/service/synthetics/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.31.0 (2024-12-17)
+
+* **Feature**: Add support to toggle outbound IPv6 traffic on canaries connected to dualstack subnets. This behavior can be controlled via the new Ipv6AllowedForDualStack parameter of the VpcConfig input object in CreateCanary and UpdateCanary APIs.
+
# v1.30.2 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/synthetics/deserializers.go b/service/synthetics/deserializers.go
index 47023e6f978..d609039aa80 100644
--- a/service/synthetics/deserializers.go
+++ b/service/synthetics/deserializers.go
@@ -5168,6 +5168,15 @@ func awsRestjson1_deserializeDocumentVpcConfigOutput(v **types.VpcConfigOutput,
for key, value := range shape {
switch key {
+ case "Ipv6AllowedForDualStack":
+ if value != nil {
+ jtv, ok := value.(bool)
+ if !ok {
+ return fmt.Errorf("expected NullableBoolean to be of type *bool, got %T instead", value)
+ }
+ sv.Ipv6AllowedForDualStack = ptr.Bool(jtv)
+ }
+
case "SecurityGroupIds":
if err := awsRestjson1_deserializeDocumentSecurityGroupIds(&sv.SecurityGroupIds, value); err != nil {
return err
diff --git a/service/synthetics/go_module_metadata.go b/service/synthetics/go_module_metadata.go
index 267bdf13785..27f2851a19e 100644
--- a/service/synthetics/go_module_metadata.go
+++ b/service/synthetics/go_module_metadata.go
@@ -3,4 +3,4 @@
package synthetics
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.30.2"
+const goModuleVersion = "1.31.0"
diff --git a/service/synthetics/serializers.go b/service/synthetics/serializers.go
index 41154f90f3c..dfbab66cab4 100644
--- a/service/synthetics/serializers.go
+++ b/service/synthetics/serializers.go
@@ -2193,6 +2193,11 @@ func awsRestjson1_serializeDocumentVpcConfigInput(v *types.VpcConfigInput, value
object := value.Object()
defer object.Close()
+ if v.Ipv6AllowedForDualStack != nil {
+ ok := object.Key("Ipv6AllowedForDualStack")
+ ok.Boolean(*v.Ipv6AllowedForDualStack)
+ }
+
if v.SecurityGroupIds != nil {
ok := object.Key("SecurityGroupIds")
if err := awsRestjson1_serializeDocumentSecurityGroupIds(v.SecurityGroupIds, ok); err != nil {
diff --git a/service/synthetics/types/types.go b/service/synthetics/types/types.go
index 3ed542eade0..739e6b50987 100644
--- a/service/synthetics/types/types.go
+++ b/service/synthetics/types/types.go
@@ -529,7 +529,9 @@ type VisualReferenceInput struct {
// future visual monitoring with this canary. Valid values are nextrun to use the
// screenshots from the next run after this update is made, lastrun to use the
// screenshots from the most recent run before this update was made, or the value
- // of Id in the [CanaryRun] from any past run of this canary.
+ // of Id in the [CanaryRun] from a run of this canary in the past 31 days. If you specify
+ // the Id of a canary run older than 31 days, the operation returns a 400
+ // validation exception error.
//
// [CanaryRun]: https://docs.aws.amazon.com/AmazonSynthetics/latest/APIReference/API_CanaryRun.html
//
@@ -571,6 +573,10 @@ type VisualReferenceOutput struct {
// [Running a Canary in a VPC]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries_VPC.html
type VpcConfigInput struct {
+ // Set this to true to allow outbound IPv6 traffic on VPC canaries that are
+ // connected to dual-stack subnets. The default is false.
+ Ipv6AllowedForDualStack *bool
+
// The IDs of the security groups for this canary.
SecurityGroupIds []string
@@ -587,6 +593,10 @@ type VpcConfigInput struct {
// [Running a Canary in a VPC]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries_VPC.html
type VpcConfigOutput struct {
+ // Indicates whether this canary allows outbound IPv6 traffic if it is connected
+ // to dual-stack subnets.
+ Ipv6AllowedForDualStack *bool
+
// The IDs of the security groups for this canary.
SecurityGroupIds []string
diff --git a/service/transcribestreaming/internal/testing/eventstream_test.go b/service/transcribestreaming/internal/testing/eventstream_test.go
index aa38355401b..fcc8dbac800 100644
--- a/service/transcribestreaming/internal/testing/eventstream_test.go
+++ b/service/transcribestreaming/internal/testing/eventstream_test.go
@@ -270,7 +270,7 @@ func TestStartStreamTranscription_ReadException(t *testing.T) {
expectedErr,
&types.BadRequestException{Message: aws.String("this is an exception message")},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
@@ -328,7 +328,7 @@ func TestStartStreamTranscription_ReadUnmodeledException(t *testing.T) {
Message: "this is an unmodeled exception message",
},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
@@ -390,7 +390,7 @@ func TestStartStreamTranscription_ReadErrorEvent(t *testing.T) {
Message: "An error message",
},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
@@ -649,7 +649,7 @@ func TestStartStreamTranscription_ResponseError(t *testing.T) {
Message: "this is an exception message",
},
); len(diff) > 0 {
- t.Errorf(diff)
+ t.Error(diff)
}
}
diff --git a/service/transfer/CHANGELOG.md b/service/transfer/CHANGELOG.md
index 1fa1303304e..0f7c215e160 100644
--- a/service/transfer/CHANGELOG.md
+++ b/service/transfer/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.55.0 (2024-12-18)
+
+* **Feature**: Added AS2 agreement configurations to control filename preservation and message signing enforcement. Added AS2 connector configuration to preserve content type from S3 objects.
+
# v1.54.0 (2024-12-02)
* **Feature**: AWS Transfer Family now offers Web apps that enables simple and secure access to data stored in Amazon S3.
diff --git a/service/transfer/api_op_CreateAgreement.go b/service/transfer/api_op_CreateAgreement.go
index a939d42bc32..e056d3d1f50 100644
--- a/service/transfer/api_op_CreateAgreement.go
+++ b/service/transfer/api_op_CreateAgreement.go
@@ -95,6 +95,28 @@ type CreateAgreementInput struct {
// A name or short description to identify the agreement.
Description *string
+ // Determines whether or not unsigned messages from your trading partners will be
+ // accepted.
+ //
+ // - ENABLED : Transfer Family rejects unsigned messages from your trading
+ // partner.
+ //
+ // - DISABLED (default value): Transfer Family accepts unsigned messages from
+ // your trading partner.
+ EnforceMessageSigning types.EnforceMessageSigningType
+
+ // Determines whether or not Transfer Family appends a unique string of
+ // characters to the end of the AS2 message payload filename when saving it.
+ //
+ // - ENABLED : the filename provided by your trading partner is preserved when the
+ // file is saved.
+ //
+ // - DISABLED (default value): when Transfer Family saves the file, the filename
+ // is adjusted, as described in [File names and locations].
+ //
+ // [File names and locations]: https://docs.aws.amazon.com/transfer/latest/userguide/send-as2-messages.html#file-names-as2
+ PreserveFilename types.PreserveFilenameType
+
// The status of the agreement. The agreement can be either ACTIVE or INACTIVE .
Status types.AgreementStatusType
diff --git a/service/transfer/api_op_UpdateAgreement.go b/service/transfer/api_op_UpdateAgreement.go
index dacabc4a45a..0d24614b1a9 100644
--- a/service/transfer/api_op_UpdateAgreement.go
+++ b/service/transfer/api_op_UpdateAgreement.go
@@ -83,6 +83,16 @@ type UpdateAgreementInput struct {
// agreement.
Description *string
+ // Determines whether or not unsigned messages from your trading partners will be
+ // accepted.
+ //
+ // - ENABLED : Transfer Family rejects unsigned messages from your trading
+ // partner.
+ //
+ // - DISABLED (default value): Transfer Family accepts unsigned messages from
+ // your trading partner.
+ EnforceMessageSigning types.EnforceMessageSigningType
+
// A unique identifier for the AS2 local profile.
//
// To change the local profile identifier, provide a new value here.
@@ -92,6 +102,18 @@ type UpdateAgreementInput struct {
// identifier, provide a new value here.
PartnerProfileId *string
+ // Determines whether or not Transfer Family appends a unique string of
+ // characters to the end of the AS2 message payload filename when saving it.
+ //
+ // - ENABLED : the filename provided by your trading partner is preserved when the
+ // file is saved.
+ //
+ // - DISABLED (default value): when Transfer Family saves the file, the filename
+ // is adjusted, as described in [File names and locations].
+ //
+ // [File names and locations]: https://docs.aws.amazon.com/transfer/latest/userguide/send-as2-messages.html#file-names-as2
+ PreserveFilename types.PreserveFilenameType
+
// You can update the status for the agreement, either activating an inactive
// agreement or the reverse.
Status types.AgreementStatusType
diff --git a/service/transfer/deserializers.go b/service/transfer/deserializers.go
index 870d99a8b01..6662089bdd9 100644
--- a/service/transfer/deserializers.go
+++ b/service/transfer/deserializers.go
@@ -8618,6 +8618,15 @@ func awsAwsjson11_deserializeDocumentAs2ConnectorConfig(v **types.As2ConnectorCo
sv.PartnerProfileId = ptr.String(jtv)
}
+ case "PreserveContentType":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected PreserveContentType to be of type string, got %T instead", value)
+ }
+ sv.PreserveContentType = types.PreserveContentType(jtv)
+ }
+
case "SigningAlgorithm":
if value != nil {
jtv, ok := value.(string)
@@ -9257,6 +9266,15 @@ func awsAwsjson11_deserializeDocumentDescribedAgreement(v **types.DescribedAgree
sv.Description = ptr.String(jtv)
}
+ case "EnforceMessageSigning":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected EnforceMessageSigningType to be of type string, got %T instead", value)
+ }
+ sv.EnforceMessageSigning = types.EnforceMessageSigningType(jtv)
+ }
+
case "LocalProfileId":
if value != nil {
jtv, ok := value.(string)
@@ -9275,6 +9293,15 @@ func awsAwsjson11_deserializeDocumentDescribedAgreement(v **types.DescribedAgree
sv.PartnerProfileId = ptr.String(jtv)
}
+ case "PreserveFilename":
+ if value != nil {
+ jtv, ok := value.(string)
+ if !ok {
+ return fmt.Errorf("expected PreserveFilenameType to be of type string, got %T instead", value)
+ }
+ sv.PreserveFilename = types.PreserveFilenameType(jtv)
+ }
+
case "ServerId":
if value != nil {
jtv, ok := value.(string)
diff --git a/service/transfer/go_module_metadata.go b/service/transfer/go_module_metadata.go
index 1dfc4e5bf44..2286f8b135c 100644
--- a/service/transfer/go_module_metadata.go
+++ b/service/transfer/go_module_metadata.go
@@ -3,4 +3,4 @@
package transfer
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.54.0"
+const goModuleVersion = "1.55.0"
diff --git a/service/transfer/serializers.go b/service/transfer/serializers.go
index fad6a98984d..16ca73b470d 100644
--- a/service/transfer/serializers.go
+++ b/service/transfer/serializers.go
@@ -4280,6 +4280,11 @@ func awsAwsjson11_serializeDocumentAs2ConnectorConfig(v *types.As2ConnectorConfi
ok.String(*v.PartnerProfileId)
}
+ if len(v.PreserveContentType) > 0 {
+ ok := object.Key("PreserveContentType")
+ ok.String(string(v.PreserveContentType))
+ }
+
if len(v.SigningAlgorithm) > 0 {
ok := object.Key("SigningAlgorithm")
ok.String(string(v.SigningAlgorithm))
@@ -5108,6 +5113,11 @@ func awsAwsjson11_serializeOpDocumentCreateAgreementInput(v *CreateAgreementInpu
ok.String(*v.Description)
}
+ if len(v.EnforceMessageSigning) > 0 {
+ ok := object.Key("EnforceMessageSigning")
+ ok.String(string(v.EnforceMessageSigning))
+ }
+
if v.LocalProfileId != nil {
ok := object.Key("LocalProfileId")
ok.String(*v.LocalProfileId)
@@ -5118,6 +5128,11 @@ func awsAwsjson11_serializeOpDocumentCreateAgreementInput(v *CreateAgreementInpu
ok.String(*v.PartnerProfileId)
}
+ if len(v.PreserveFilename) > 0 {
+ ok := object.Key("PreserveFilename")
+ ok.String(string(v.PreserveFilename))
+ }
+
if v.ServerId != nil {
ok := object.Key("ServerId")
ok.String(*v.ServerId)
@@ -6463,6 +6478,11 @@ func awsAwsjson11_serializeOpDocumentUpdateAgreementInput(v *UpdateAgreementInpu
ok.String(*v.Description)
}
+ if len(v.EnforceMessageSigning) > 0 {
+ ok := object.Key("EnforceMessageSigning")
+ ok.String(string(v.EnforceMessageSigning))
+ }
+
if v.LocalProfileId != nil {
ok := object.Key("LocalProfileId")
ok.String(*v.LocalProfileId)
@@ -6473,6 +6493,11 @@ func awsAwsjson11_serializeOpDocumentUpdateAgreementInput(v *UpdateAgreementInpu
ok.String(*v.PartnerProfileId)
}
+ if len(v.PreserveFilename) > 0 {
+ ok := object.Key("PreserveFilename")
+ ok.String(string(v.PreserveFilename))
+ }
+
if v.ServerId != nil {
ok := object.Key("ServerId")
ok.String(*v.ServerId)
diff --git a/service/transfer/types/enums.go b/service/transfer/types/enums.go
index eec1403b68a..45660cf0777 100644
--- a/service/transfer/types/enums.go
+++ b/service/transfer/types/enums.go
@@ -239,6 +239,25 @@ func (EndpointType) Values() []EndpointType {
}
}
+type EnforceMessageSigningType string
+
+// Enum values for EnforceMessageSigningType
+const (
+ EnforceMessageSigningTypeEnabled EnforceMessageSigningType = "ENABLED"
+ EnforceMessageSigningTypeDisabled EnforceMessageSigningType = "DISABLED"
+)
+
+// Values returns all known values for EnforceMessageSigningType. Note that this
+// can be expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (EnforceMessageSigningType) Values() []EnforceMessageSigningType {
+ return []EnforceMessageSigningType{
+ "ENABLED",
+ "DISABLED",
+ }
+}
+
type ExecutionErrorType string
// Enum values for ExecutionErrorType
@@ -419,6 +438,44 @@ func (OverwriteExisting) Values() []OverwriteExisting {
}
}
+type PreserveContentType string
+
+// Enum values for PreserveContentType
+const (
+ PreserveContentTypeEnabled PreserveContentType = "ENABLED"
+ PreserveContentTypeDisabled PreserveContentType = "DISABLED"
+)
+
+// Values returns all known values for PreserveContentType. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (PreserveContentType) Values() []PreserveContentType {
+ return []PreserveContentType{
+ "ENABLED",
+ "DISABLED",
+ }
+}
+
+type PreserveFilenameType string
+
+// Enum values for PreserveFilenameType
+const (
+ PreserveFilenameTypeEnabled PreserveFilenameType = "ENABLED"
+ PreserveFilenameTypeDisabled PreserveFilenameType = "DISABLED"
+)
+
+// Values returns all known values for PreserveFilenameType. Note that this can be
+// expanded in the future, and so it is only as up to date as the client.
+//
+// The ordering of this slice is not guaranteed to be stable across updates.
+func (PreserveFilenameType) Values() []PreserveFilenameType {
+ return []PreserveFilenameType{
+ "ENABLED",
+ "DISABLED",
+ }
+}
+
type ProfileType string
// Enum values for ProfileType
diff --git a/service/transfer/types/types.go b/service/transfer/types/types.go
index d2a88709dfb..ae86a4283f1 100644
--- a/service/transfer/types/types.go
+++ b/service/transfer/types/types.go
@@ -84,6 +84,13 @@ type As2ConnectorConfig struct {
// A unique identifier for the partner profile for the connector.
PartnerProfileId *string
+ // Allows you to use the Amazon S3 Content-Type that is associated with objects in
+ // S3 instead of having the content type mapped based on the file extension. This
+ // parameter is enabled by default when you create an AS2 connector from the
+ // console, but disabled by default when you create an AS2 connector by calling the
+ // API directly.
+ PreserveContentType PreserveContentType
+
// The algorithm that is used to sign the AS2 messages sent with the connector.
SigningAlgorithm SigningAlg
@@ -397,12 +404,34 @@ type DescribedAgreement struct {
// The name or short description that's used to identify the agreement.
Description *string
+ // Determines whether or not unsigned messages from your trading partners will be
+ // accepted.
+ //
+ // - ENABLED : Transfer Family rejects unsigned messages from your trading
+ // partner.
+ //
+ // - DISABLED (default value): Transfer Family accepts unsigned messages from
+ // your trading partner.
+ EnforceMessageSigning EnforceMessageSigningType
+
// A unique identifier for the AS2 local profile.
LocalProfileId *string
// A unique identifier for the partner profile used in the agreement.
PartnerProfileId *string
+ // Determines whether or not Transfer Family appends a unique string of
+ // characters to the end of the AS2 message payload filename when saving it.
+ //
+ // - ENABLED : the filename provided by your trading partner is preserved when the
+ // file is saved.
+ //
+ // - DISABLED (default value): when Transfer Family saves the file, the filename
+ // is adjusted, as described in [File names and locations].
+ //
+ // [File names and locations]: https://docs.aws.amazon.com/transfer/latest/userguide/send-as2-messages.html#file-names-as2
+ PreserveFilename PreserveFilenameType
+
// A system-assigned unique identifier for a server instance. This identifier
// indicates the specific server that the agreement uses.
ServerId *string
@@ -452,9 +481,8 @@ type DescribedCertificate struct {
// The serial number for the certificate.
Serial *string
- // The certificate can be either ACTIVE , PENDING_ROTATION , or INACTIVE .
- // PENDING_ROTATION means that this certificate will replace the current
- // certificate when it expires.
+ // Currently, the only available status is ACTIVE : all other values are reserved
+ // for future use.
Status CertificateStatusType
// Key-value pairs that can be used to group and search for certificates.
diff --git a/service/trustedadvisor/CHANGELOG.md b/service/trustedadvisor/CHANGELOG.md
index 73be8948b2e..b760a0b9333 100644
--- a/service/trustedadvisor/CHANGELOG.md
+++ b/service/trustedadvisor/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.8.8 (2024-12-13)
+
+* No change notes available for this release.
+
# v1.8.7 (2024-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
diff --git a/service/trustedadvisor/go_module_metadata.go b/service/trustedadvisor/go_module_metadata.go
index 884ea004b8c..4704ab69959 100644
--- a/service/trustedadvisor/go_module_metadata.go
+++ b/service/trustedadvisor/go_module_metadata.go
@@ -3,4 +3,4 @@
package trustedadvisor
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.8.7"
+const goModuleVersion = "1.8.8"
diff --git a/service/trustedadvisor/internal/endpoints/endpoints.go b/service/trustedadvisor/internal/endpoints/endpoints.go
index fde3fcbeb1f..c80f9ff85bc 100644
--- a/service/trustedadvisor/internal/endpoints/endpoints.go
+++ b/service/trustedadvisor/internal/endpoints/endpoints.go
@@ -138,6 +138,32 @@ var defaultPartitions = endpoints.Partitions{
},
RegionRegex: partitionRegexp.Aws,
IsRegionalized: true,
+ Endpoints: endpoints.Endpoints{
+ endpoints.EndpointKey{
+ Region: "fips-us-east-1",
+ }: endpoints.Endpoint{
+ Hostname: "trustedadvisor-fips.us-east-1.api.aws",
+ CredentialScope: endpoints.CredentialScope{
+ Region: "us-east-1",
+ },
+ },
+ endpoints.EndpointKey{
+ Region: "fips-us-east-2",
+ }: endpoints.Endpoint{
+ Hostname: "trustedadvisor-fips.us-east-2.api.aws",
+ CredentialScope: endpoints.CredentialScope{
+ Region: "us-east-2",
+ },
+ },
+ endpoints.EndpointKey{
+ Region: "fips-us-west-2",
+ }: endpoints.Endpoint{
+ Hostname: "trustedadvisor-fips.us-west-2.api.aws",
+ CredentialScope: endpoints.CredentialScope{
+ Region: "us-west-2",
+ },
+ },
+ },
},
{
ID: "aws-cn",
diff --git a/service/vpclattice/CHANGELOG.md b/service/vpclattice/CHANGELOG.md
index d7819a3cfcf..f531402dead 100644
--- a/service/vpclattice/CHANGELOG.md
+++ b/service/vpclattice/CHANGELOG.md
@@ -1,3 +1,7 @@
+# v1.13.2 (2024-12-17)
+
+* No change notes available for this release.
+
# v1.13.1 (2024-12-11)
* No change notes available for this release.
diff --git a/service/vpclattice/go_module_metadata.go b/service/vpclattice/go_module_metadata.go
index 062e1f27e6c..3cb2788237b 100644
--- a/service/vpclattice/go_module_metadata.go
+++ b/service/vpclattice/go_module_metadata.go
@@ -3,4 +3,4 @@
package vpclattice
// goModuleVersion is the tagged release for this module
-const goModuleVersion = "1.13.1"
+const goModuleVersion = "1.13.2"
diff --git a/service/vpclattice/internal/endpoints/endpoints.go b/service/vpclattice/internal/endpoints/endpoints.go
index 1e979c6455d..bce707a1d7a 100644
--- a/service/vpclattice/internal/endpoints/endpoints.go
+++ b/service/vpclattice/internal/endpoints/endpoints.go
@@ -175,6 +175,9 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "ca-central-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "ca-west-1",
+ }: endpoints.Endpoint{},
endpoints.EndpointKey{
Region: "eu-central-1",
}: endpoints.Endpoint{},
@@ -187,6 +190,9 @@ var defaultPartitions = endpoints.Partitions{
endpoints.EndpointKey{
Region: "eu-south-1",
}: endpoints.Endpoint{},
+ endpoints.EndpointKey{
+ Region: "eu-south-2",
+ }: endpoints.Endpoint{},
endpoints.EndpointKey{
Region: "eu-west-1",
}: endpoints.Endpoint{},