TransportError ... is now earlier than... occurring irregularly #2618
Comments
We are storing the credentials as a singleton that is refreshed when they are detected as out of date.
@stobrien89 Yes, this is correct.
Hi @risoms, thanks for following up. I found similar issues here and here that suggest this is a clock sync issue. Have you tried syncing your machine's clock with NTP? For example, running sudo ntpdate.
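(For reference, a minimal sketch of checking local clock skew from Python; the third-party ntplib package and the NTP pool host are assumptions, not part of the thread.)

```python
import ntplib  # assumption: third-party package, pip install ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
# offset is the local clock's drift from NTP time, in seconds; SigV4
# requests start failing once skew exceeds roughly five minutes.
print(f"local clock offset: {response.offset:.3f}s")
```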
@tim-finnigan No luck. The issue still persists after introducing syncing with sudo ntpdate. Also, the delta between t1 and t0 can sometimes be much larger than in the examples provided. We've seen cases where it is on the order of days and weeks.
Also, is there caching that occurs with this authentication? The t0 can vary across observations. For instance, we can see t0 be 20220214T034700 and, a few minutes later, 20220214T011200 (yes, the value decreased).
Hi @risoms, can you describe in more detail how you are authenticating? You can see the boto3 credentials documentation for reference; there is caching behavior described here. Also, are you still on "AWS Ubuntu 16.04.3 LTS xenial"? Have you seen this issue on any other systems?
@tim-finnigan I've included the code used to authenticate below. The authentication is stored as a singleton.

```python
import boto3
from requests_aws4auth import AWS4Auth  # assuming AWS4Auth comes from the requests-aws4auth package

credentials = boto3.session.Session().get_credentials()
auth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    region,
    service,
    session_token=credentials.token,
)
```

When it's detected as expired (using the check below), we re-authenticate.

```python
cls.credentials.refresh_needed()
```

Yes, we're still on AWS Ubuntu 16.04.3 LTS xenial. This issue hasn't occurred anywhere else.
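(A hedged sketch of how this pattern might fit together; the class and method names are hypothetical and do not appear in the thread, and it assumes get_credentials() returned botocore RefreshableCredentials, which is what exposes refresh_needed().)

```python
import boto3
from requests_aws4auth import AWS4Auth


class AuthSingleton:
    """Hypothetical illustration of the refresh pattern described above."""

    credentials = None
    auth = None

    @classmethod
    def get_auth(cls, region, service):
        # Rebuild the signer on first use, or when botocore reports that
        # the cached (refreshable) credentials are near expiry.
        if cls.credentials is None or cls.credentials.refresh_needed():
            cls.credentials = boto3.session.Session().get_credentials()
            cls.auth = AWS4Auth(
                cls.credentials.access_key,
                cls.credentials.secret_key,
                region,
                service,
                session_token=cls.credentials.token,
            )
        return cls.auth
```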
Hi @risoms, is there a pattern in which services/APIs are in use when this error occurs? Do you have any third-party packages installed that may be causing a clock skew issue? I know this link was shared in your previous issue, but I want to post it again in case you want to retry configuring chrony: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service-ubuntu Also, I saw the JS SDK offers a correctClockSkew config option, and there is an open feature request to add that to boto3: boto/boto3#1252
No patterns that we've been able to identify. One thing to note: the error occurs in one of our Docker containers but not the other.
We've attempted using chrony to resolve this with no luck, sadly.
Hi @risoms, I'm still not sure if this is a boto3/botocore issue or a local clock sync issue. The fact that this happens in one of your Docker containers but not the other makes me think that there is something specific to that container causing this. Do you think it could be something to do with how your Flask application is set up?
Greetings! It looks like this issue hasn't been active in longer than five days. We encourage you to check if this is still an issue in the latest release. In the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please feel free to provide a comment or upvote with a reaction on the initial post to prevent automatic closure. If the issue is already closed, please feel free to open a new one.
This can safely be closed. The issue was very much not related to boto3. We were able to identify the cause: the library freezegun was being used concurrently with queries to AWS. Thank you for your patience and help with this!
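(For context, a minimal sketch of this failure mode; the timestamp and S3 call are illustrative assumptions, not the reporter's actual code. A frozen clock also explains why t0 appeared to move backwards between observations.)

```python
import boto3
from freezegun import freeze_time

# While time is frozen, SigV4 signs requests with the frozen timestamp.
# If that timestamp is minutes, days, or weeks away from real time, AWS
# rejects the request with a "...is now earlier than..." clock-skew error.
with freeze_time("2022-02-14 01:12:00"):
    boto3.client("s3").list_buckets()  # expected to fail with a skew error
```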
Ok, thanks for letting us know! Glad you were able to identify the problem.
Describe the bug
Note - This is the same issue as here. As suggested, I'm reopening because the recommended solutions unfortunately haven't worked.
We are seeing a regular occurrence of TransportError using boto3, at intervals ranging from minutes to days.
Steps to reproduce
Expected behavior
When using elasticsearch.helpers.scan, we expect it to behave as an abstraction over the Elasticsearch scroll() API. Usually it does.
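(For illustration, a minimal sketch of this kind of call, assuming the elasticsearch-py client signed with the AWS4Auth object shown earlier in the thread; the host, index, and query are placeholders.)

```python
from elasticsearch import Elasticsearch, RequestsHttpConnection
from elasticsearch.helpers import scan

# Placeholder endpoint; "auth" is the AWS4Auth instance from above.
es = Elasticsearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# scan() wraps the scroll() API, paging through every matching document.
for hit in scan(es, index="my-index", query={"query": {"match_all": {}}}):
    print(hit["_source"])
```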
Debug logs