00:26:01 Error: failed to update flag "feature-x" in project "default": 400 Bad Request: {"code":"invalid_request","message":"Flag salt cannot be modified"}
This was seen while many launchdarkly_feature_flag_environment applications were occurring simultaneously in separate executions of Terraform.
Subsequent individual re-runs of those Terraform executions seem to work out OK, and we haven't seen this outside of that window.
Also saw many of these in the logs:
00:22:00 2020-08-03T04:22:00.073Z [DEBUG] plugin.terraform-provider-launchdarkly_v1.3.2_x4: 2020/08/03 04:22:00 [DEBUG] received a 429 Too Many Requests error. retrying
00:22:00 2020-08-03T04:22:00.074Z [DEBUG] plugin.terraform-provider-launchdarkly_v1.3.2_x4: 2020/08/03 04:22:00 [DEBUG] sleeping 59.926172523s
Not entirely sure if it's the cause, but they do seem to correlate.
Took a look at that retry in the code:
https://github.com/launchdarkly/terraform-provider-launchdarkly/blob/master/launchdarkly/helper.go
It appears that the random sleep is only used when an invalid rate-limit header is returned (which was not the case here; we didn't see those other debug messages). For our usage, the flood of simultaneous creations, all using Terraform, would conceivably all wait the prescribed ~60 seconds (as returned by the rate-limit header) and then all flood the APIs at the same time again, creating another round of 429s.
I think adding a random element to that wait would be reasonable: a tunable random proportion of the returned rate-limit time, with a default value of 0 to disable the added randomness. In this case it would add up to a 50% wait increase (1-30 seconds for a returned 60-second rate-limit header) on top of the rate-limit header's wait time.
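A minimal sketch of that jitter in Go, assuming a hypothetical jitteredWait helper and a rate_limit_random_percentage-style tunable (the names and wiring are illustrative, not the provider's actual code):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// jitteredWait extends the wait prescribed by the rate-limit header by a
// random proportion, so clients that were throttled at the same moment do
// not all retry at the same instant. randomPercentage is the hypothetical
// tunable; 0 preserves the current behavior (wait exactly the header time).
func jitteredWait(headerWait time.Duration, randomPercentage int) time.Duration {
	if randomPercentage <= 0 || headerWait <= 0 {
		return headerWait
	}
	maxExtra := headerWait * time.Duration(randomPercentage) / 100
	return headerWait + time.Duration(rand.Int63n(int64(maxExtra)+1))
}

func main() {
	// A 60-second rate-limit header with 50% jitter sleeps 60-90 seconds,
	// spreading the retries out instead of re-synchronizing them.
	fmt.Println(jitteredWait(60*time.Second, 50))
}
```

Whether the random portion is added on top of the header wait (as above) or folded into it is a detail; the point is just to decorrelate the retries.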
I like the idea of adding rate_limit_random_percentage. I'm not sure if it will help with "Flag salt cannot be modified" but it certainly won't hurt. I've created an internal ticket to investigate further and I'll reach out to the application team to get more details about what circumstances can cause this 400.
BTW, can you elaborate on this:
This was seen while many launchdarkly_feature_flag_environment applications were occurring simultaneously in separate executions of Terraform.
Were the separate executions trying to modify the same feature flag(s)?
The separate executions of Terraform manage independent environments of the same project. They share feature flags in the sense that each environment of the project contains the project's feature flags, but the variation rules for those flags are unique to each environment (or at least replicated per environment).
default_project/ # managed outside of terraform
feature_flag_a # managed outside of terraform
feature_flag_b # managed outside of terraform
environment1/ # managed by terraform execution 1 via 'launchdarkly_environment'
feature_flag_a # managed by terraform execution 1 via 'launchdarkly_feature_flag_environment'
feature_flag_b # managed by terraform execution 1 via 'launchdarkly_feature_flag_environment'
environment2/ # managed by terraform execution 2 via 'launchdarkly_environment'
feature_flag_a # managed by terraform execution 2 via 'launchdarkly_feature_flag_environment'
feature_flag_b # managed by terraform execution 2 via 'launchdarkly_feature_flag_environment'