**Describe the bug**

If `endpoint_url` is not passed to the `boto3.resource()` constructor for an SQS connection, boto3 defaults to the legacy service endpoint `queue.amazonaws.com` rather than the region-specific URL (e.g. `sqs.us-east-1.amazonaws.com`). When deployed to AWS this works if the container runs in a public subnet, but it fails if the container runs in a private subnet with SQS access provided via a VPC endpoint.

Details: boto3 still uses the legacy AWS service endpoints (i.e. `queue.amazonaws.com`) rather than `sqs.us-east-1.amazonaws.com` in order to support Python 2 (endpoints reference: https://docs.aws.amazon.com/general/latest/gr/sqs-service.html). See the GitHub issue: boto/boto3#1900.

**Suggested fix:** Always use `ENDPOINT_URL_SQS` in the `boto3.resource()` constructor if it is provided via environment variables. Near here, if we replace
```python
if CONFIG.AWS_ACCESS_KEY_ID is None:
    # Running in AWS
    # Using IAM Role for Credentials
    CLIENT = boto3.resource('sqs')
else:
    # Local Testing
    # ElasticMQ with Credentials via AWS_ environment variables
    CLIENT = boto3.resource(
        'sqs',
        endpoint_url=CONFIG.ENDPOINT_URL_SQS,
        region_name=CONFIG.AWS_REGION_SQS,
        aws_secret_access_key=CONFIG.AWS_SECRET_ACCESS_KEY_SQS,
        aws_access_key_id=CONFIG.AWS_ACCESS_KEY_ID_SQS,
        use_ssl=CONFIG.USE_SSL
    )
```
with
if CONFIG.AWS_ACCESS_KEY_ID == "x":
# Running in AWS
# Using IAM Role for Credentials
if CONFIG.ENDPOINT_URL_SQS:
CLIENT = boto3.resource(
"sqs",
endpoint_url=CONFIG.ENDPOINT_URL_SQS,
)
else:
CLIENT = boto3.resource("sqs")
else:
# Local Testing
# ElasticMQ with Credentials via AWS_ environment variables
CLIENT = boto3.resource(
"sqs",
endpoint_url=CONFIG.ENDPOINT_URL_SQS,
region_name=CONFIG.AWS_REGION_SQS,
aws_secret_access_key=CONFIG.AWS_SECRET_ACCESS_KEY_SQS,
aws_access_key_id=CONFIG.AWS_ACCESS_KEY_ID_SQS,
use_ssl=CONFIG.USE_SSL,
)
and provide the appropriate environment variable `ENDPOINT_URL_SQS`, everything should work.
(The same change was recently required on water-api here.)
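As a quick sanity check (a minimal sketch, not code from this repo; the region value is an assumption), the endpoint a resource actually resolved can be inspected via `meta.client.meta.endpoint_url`, which makes the legacy-vs-regional difference easy to see:

```python
import boto3

# Without endpoint_url, boto3 falls back to the legacy SQS endpoint
# described in boto/boto3#1900, which is unreachable from a private
# subnet whose only SQS route is a VPC interface endpoint.
default_sqs = boto3.resource("sqs", region_name="us-east-1")
print(default_sqs.meta.client.meta.endpoint_url)
# expected (per the linked boto3 issue): https://queue.amazonaws.com

# With an explicit endpoint_url, requests target the regional endpoint,
# which the VPC endpoint's private DNS can resolve inside the subnet.
regional_sqs = boto3.resource(
    "sqs",
    region_name="us-east-1",
    endpoint_url="https://sqs.us-east-1.amazonaws.com",
)
print(regional_sqs.meta.client.meta.endpoint_url)
# https://sqs.us-east-1.amazonaws.com
```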
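For completeness, here is a hypothetical sketch of how the `CONFIG` values referenced above might be sourced (the real config module is not shown in this issue). The point is that `ENDPOINT_URL_SQS` is optional: when the variable is unset it stays `None`, so the new `if CONFIG.ENDPOINT_URL_SQS:` branch falls through to boto3's default behavior.

```python
import os

# Hypothetical CONFIG-style values; the names mirror the snippet above but
# the actual config module is an assumption.
ENDPOINT_URL_SQS = os.environ.get("ENDPOINT_URL_SQS")    # e.g. https://sqs.us-east-1.amazonaws.com
AWS_REGION_SQS = os.environ.get("AWS_REGION_SQS")         # e.g. us-east-1
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID")   # "x" when relying on the IAM role
```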
I'll go ahead and make the change via PR. cc @adamscarberry @jeffsuperglide