
Document the use of max_delay and somehow make it possible to use it without inspecting the library source #160

Open
nicholasd-ff opened this issue Mar 13, 2024 · 3 comments

Comments

@nicholasd-ff

$ python --version 
Python 3.10.12
$ pip3 show pyrate_limiter
Name: pyrate-limiter
Version: 3.4.1
Summary: Python Rate-Limiter using Leaky-Bucket Algorithm
Home-page: https://github.com/vutran1710/PyrateLimiter
Author: vutr
Author-email: [email protected]
License: MIT
Location: /home/nicholasd-ff/.local/share/virtualenvs/sbert-ingest-lambda-AE7HPjkv/lib/python3.10/site-packages
Requires: 
Required-by: 

Hi, thanks for all your work on pyrate-limiter. I've been testing my code after upgrading from the 2.x series to 3.4.1 and have run into some issues with max_delay.

In the previous major version of the library it was easy to create a requester that simply blocked until it was safe to make a new request. Now it seems max_delay must be set in order to block, and choosing max_delay requires a formula based on the maximum wait period plus knowledge of the engine's internals (specifically, that 50 ms is added to the required delay, so you can't just set max_delay to 1000 ms if you want one request per second).

For example:

#!/usr/bin/env python3
from __future__ import annotations
from pyrate_limiter import Duration, Limiter, Rate
import time
def make_reqs(l: Limiter):
  # Try three acquisitions in a row and report each result with its timing.
  start = time.time()
  for i in range(3):
    print(f"\tAttempt: {i} Success: {l.try_acquire('some-bucket')} at T {time.time() - start:.2f}s")

print("With raise_when_fail=False, no max_delay")
make_reqs(Limiter(Rate(1, Duration.SECOND), raise_when_fail=False))
print("With raise_when_fail=False, max_delay=1000")
make_reqs(Limiter(Rate(1, Duration.SECOND), raise_when_fail=False, max_delay=1000))
print("With raise_when_fail=False, max_delay=1500")
make_reqs(Limiter(Rate(1, Duration.SECOND), raise_when_fail=False, max_delay=1500))

Produces the following output:

With raise_when_fail=False, no max_delay
        Attempt: 0 Success: True at T 0.00s
        Attempt: 1 Success: False at T 0.00s
        Attempt: 2 Success: False at T 0.00s
With raise_when_fail=False, max_delay=1000
        Attempt: 0 Success: True at T 0.00s
Required delay too large: actual=1050, expected=1000
        Attempt: 1 Success: False at T 0.00s
Required delay too large: actual=1050, expected=1000
        Attempt: 2 Success: False at T 0.00s
With raise_when_fail=False, max_delay=1500
        Attempt: 0 Success: True at T 0.00s
        Attempt: 1 Success: True at T 1.05s
        Attempt: 2 Success: True at T 2.10s

With a set of rates (e.g. 1 request/second and 500/hour) it becomes quite complicated to calculate a max_delay such that the API actually blocks; otherwise try_acquire just returns False immediately and you have to implement your own retry loop.
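
The best workaround I've found is to derive max_delay from the rates myself. A rough sketch (this assumes Rate.interval is in milliseconds like the Duration constants, that Limiter accepts a plain list of rates, and that the engine's 50 ms buffer stays fixed across releases, none of which is documented):

#!/usr/bin/env python3
from pyrate_limiter import Duration, Limiter, Rate

rates = [Rate(1, Duration.SECOND), Rate(500, Duration.HOUR)]

# Worst case, a request may have to wait for the full window of the
# slowest rate, plus the engine's internal 50 ms buffer seen in the
# "Required delay too large" messages above (an implementation
# detail of 3.4.1 that could change in any release).
max_delay = max(rate.interval for rate in rates) + 50

limiter = Limiter(rates, raise_when_fail=False, max_delay=max_delay)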

@vutran1710
Owner

Agreed, I will work on it soon.

@nicholasd-ff
Author

nicholasd-ff commented Mar 13, 2024

I'm happy to contribute a PR and tests if you can explain how you think it should work. The options that make sense to me are:

  • sleep for bucket.waiting(item) if raise_when_fail is False and max_delay is unset, or
  • downgrade the logger.error "Required delay too large" to a warning and sleep for bucket.waiting(item), but only if raise_when_fail is False.

But I admit I haven't thought through all the corner cases this behaviour might produce.
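
For context, the user-side retry loop I'd like to avoid looks roughly like this (a minimal sketch; the try_acquire_blocking helper is hypothetical, built only on the public try_acquire used in my example above):

import time
from pyrate_limiter import Duration, Limiter, Rate

def try_acquire_blocking(limiter: Limiter, name: str, poll: float = 0.05) -> None:
  # Spin on try_acquire until it succeeds. Polling is cruder than
  # sleeping for exactly bucket.waiting(item), but it needs only the
  # public API.
  while not limiter.try_acquire(name):
    time.sleep(poll)

limiter = Limiter(Rate(1, Duration.SECOND), raise_when_fail=False)
for i in range(3):
  try_acquire_blocking(limiter, 'some-bucket')
  print(f"Attempt {i} admitted")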

@vutran1710
Owner

Sounds good to me.
Testing the delay & wait should be easy using InMemoryBucket, I think.
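
Something like this sketch, perhaps (assuming InMemoryBucket takes a list of rates and Limiter accepts a bucket directly, as in 3.x; the expected timings mirror the max_delay=1500 run shown earlier in this issue):

import time
from pyrate_limiter import Duration, InMemoryBucket, Limiter, Rate

def test_blocking_acquire():
  bucket = InMemoryBucket([Rate(1, Duration.SECOND)])
  limiter = Limiter(bucket, raise_when_fail=False, max_delay=1500)
  start = time.time()
  # All three acquisitions should succeed, with the second and third
  # each delayed by roughly one second.
  assert all(limiter.try_acquire('test-item') for _ in range(3))
  assert time.time() - start >= 2.0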
