Hi, I tried to use this library to avoid exceeding the rate limits of an external API. I tested the rate limiter locally with one user and everything worked like a charm, but when I deployed it to production with 100 users, I started catching exceptions in Sentry:
Re-acquiring with delay expected to be successful, if it failed then either clock or bucket is probably unstable
and
File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/limiter.py", line 322, in _handle_async_result
result = await result
└ <coroutine object Limiter.handle_bucket_put.<locals>._put_async at 0xed5962020930>
File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/limiter.py", line 257, in _put_async
result = await result
└ <coroutine object Limiter.delay_or_raise.<locals>._handle_async at 0xed59621c8540>
File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/limiter.py", line 180, in _handle_async
delay = await delay
└ <coroutine object AbstractBucket.waiting.<locals>._calc_waiting_async at 0xed5962096500>
File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/abstracts/bucket.py", line 99, in _calc_waiting_async
return _calc_waiting(bound_item)
│ └ <pyrate_limiter.abstracts.rate.RateItem object at 0xed596209ddc0>
└ <function AbstractBucket.waiting.<locals>._calc_waiting at 0xed59621eb6a0>
File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/abstracts/bucket.py", line 83, in _calc_waiting
assert self.failing_rate is not None # NOTE: silence mypy
│ └ None
└ <pyrate_limiter.buckets.redis_bucket.RedisBucket object at 0xed5962fd87a0>
AssertionError: assert self.failing_rate is not None # NOTE: silence mypy
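For context, the limiter is shared by roughly 100 concurrent workers. The setup looks more or less like this (simplified from memory; the exact rates, Redis settings and arguments are trimmed, so treat the parameter names as approximate):

```python
import asyncio

from redis.asyncio import Redis
from pyrate_limiter import Duration, Limiter, Rate, RedisBucket


async def main():
    redis = Redis.from_url("redis://localhost:6379")
    rates = [Rate(100, Duration.MINUTE)]  # illustrative limit, not the real one
    bucket = await RedisBucket.init(rates, redis, "external-api")

    # max_delay makes the limiter sleep and re-acquire instead of raising immediately
    limiter = Limiter(bucket, raise_when_fail=True, max_delay=Duration.MINUTE)

    async def worker(user_id: int):
        # every worker shares the same limiter; this is where the re-acquire happens
        await limiter.try_acquire(f"user-{user_id}")
        ...  # call the external API here

    # ~100 workers hitting the limiter concurrently, as in production
    await asyncio.gather(*(worker(i) for i in range(100)))


asyncio.run(main())
```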
I don't quite get why re-acquiring has to be successful. If there are multiple concurrent workers sending requests to the API, many of them might exhaust the rate limit and go to asyncio.sleep. After the delay, a worker might exceed the limit again if other workers made requests in the meantime. I was thinking about implementing a queue with exactly one consumer that sends the API requests, but in that case I need the response back, and implementing RPC is not quite trivial and requires a message broker.
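What I had in mind for the single-consumer approach is roughly the sketch below: an asyncio.Queue plus a Future per request, so the caller still gets the response back. The call_external_api function is just a placeholder, and this only works inside one process and event loop:

```python
import asyncio

request_queue: asyncio.Queue = asyncio.Queue()


async def consumer() -> None:
    """The single consumer: the only task that talks to the external API,
    so the rate limiter is only ever acquired from one place."""
    while True:
        payload, future = await request_queue.get()
        try:
            response = await call_external_api(payload)  # placeholder for the real call
            future.set_result(response)
        except Exception as exc:
            future.set_exception(exc)
        finally:
            request_queue.task_done()


async def send_request(payload):
    """Called by any worker; waits until the consumer returns the response."""
    future = asyncio.get_running_loop().create_future()
    await request_queue.put((payload, future))
    return await future


async def call_external_api(payload):
    ...  # placeholder: rate limiting + the actual HTTP request would live here
```

Within a single process this would be enough, but across multiple worker processes it would still need a broker for the responses, which is the part I'd like to avoid.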
The mechanism behind re-acquiring is: we measure the time until the next slot becomes available for the item, then we sleep and wait. So technically, after that time has passed, the slot should be free and the bucket should be able to accept the waiting item.
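Very roughly, the flow is something like this (simplified pseudo-code, not the actual implementation; I'm assuming the default millisecond clock here):

```python
import asyncio


async def delay_then_reacquire(bucket, item, max_delay):
    # the first put failed, so ask the bucket how long until a slot frees up
    wait = await bucket.waiting(item)  # milliseconds, assuming the default clock

    if wait > max_delay:
        raise Exception("required delay exceeds max_delay")

    # sleep until that slot should be available again
    await asyncio.sleep(wait / 1000)

    # retry: by now the slot is expected to be free, so a second failure is
    # treated as a clock/bucket problem (which is the error you are seeing)
    if not await bucket.put(item):
        raise Exception("Re-acquiring with delay expected to be successful, ...")
```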
> So technically after such time has passed, the slot should be free

In single-threaded applications this should be fine, but when there are multiple threads or coroutines, a race condition can occur. Is there any workaround for such scenarios?
Must admit I haven't spent enough time testing it out in multi-threaded environments, therefore I'm not aware of any workaround yet.