http ratelimit: option to reduce budget on stream done #37548
base: main
Conversation
CC @envoyproxy/api-shepherds: Your approval is needed for changes made to api/envoy/extensions/filters/http/ratelimit/v3/rate_limit.proto
I guess the impl can be a bit large, so I might do that in separate PRs - anyway, I will think about it after the API gets approved.
Wow, we have a similar requirement internally and I finally figured out a similar way. I am super surprised and happy to see this.
Cool, glad to hear that you came to a similar idea!
@wbpcode thank you for the valuable feedback offline! I think I will go ahead and try implementing the idea - I don't think the change will be that huge.
From an API perspective, LGTM, thanks! This would help users also count response-based attributes into the same RL bucket.
Now I am working on polishing the impl...
@wbpcode I feel like the impl is ready for review - much simpler than I thought. Could you review it? Meanwhile I will add more tests.
Thanks for this great contribution and sorry for the delay. Add a comment to the API.
/wait
// If true, rate limit requests will also be sent to the rate limit service when the stream completes.
// This is useful when the rate limit budget needs to reflect response context that is not available
// on the request path.
//
// On stream completion, the filter reuses the exact same descriptors matched during the request path.
// In other words, the descriptors are not recalculated on stream completion; the rate limit requests
// are sent with the same descriptors as the original request sent during the request path.
// For example, request header matching descriptors are still available on stream completion.
//
// As a concrete use case, say the upstream service calculates usage statistics, returns them in the response body,
// and we want to use these numbers to apply the rate limit action to subsequent requests.
// Combined with another filter that can set ``envoy.ratelimit.hits_addend`` based on the response (e.g. the Lua filter),
// this can be used to subtract the usage statistics from the rate limit budget.
//
// The rate limit requests sent on stream completion are "fire-and-forget" by nature, and the rate limit is not
// enforced on the HTTP stream being completed. The filter only updates the budget for subsequent requests at
// that point. Hence the effect of the rate limit requests made during stream completion is not visible to the
// current request, but only to subsequent ones.
bool apply_on_stream_done = 14;
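As a rough, hedged illustration (not part of this PR's diff), enabling the field above in an Envoy HTTP filter chain might look like the sketch below; the domain, cluster name, and other surrounding values are hypothetical, and note that the review below suggests moving the flag to the per-descriptor level:

# Hypothetical sketch: the ratelimit filter with the new flag enabled.
http_filters:
- name: envoy.filters.http.ratelimit
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
    domain: llm_usage                       # hypothetical rate limit domain
    apply_on_stream_done: true              # the field added in this change (filter-level variant)
    rate_limit_service:
      transport_api_version: V3
      grpc_service:
        envoy_grpc:
          cluster_name: ratelimit_service   # hypothetical cluster name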
- Rather than a filter-level flag, I think it should be at the per-descriptor level (in the route.v3.RateLimit). Then different routes could make different choices. More importantly, different resources could also make different choices: e.g. the QPS limit only needs to work on the request path, while the AI token limit only needs to work on the response.
- The semantics should be clear: request or response, but not both. When we populate the descriptors, we can tell populateDescriptors() what phase it is and let populateDescriptors() only populate the related descriptors. This brings some complexity in that users need to configure two descriptors: one for the request with a 0 or 1 hits addend for check only, and one for the response with CUSTOM_HITS_ADDEND or CUSTOM_HITS_ADDEND - 1 for report. But the semantics are clearer and the behavior is more predictable. If we let the same descriptor work on both request and response, we must ensure we can get correct, different values for hits_addend from the same source, which is weird.
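To make this suggestion concrete, a purely hypothetical sketch of a per-descriptor flag on the route-level rate limits could look like the following; neither the placement nor the field name is settled by this comment:

rate_limits:
# Request-path descriptor: checked and enforced when the request arrives.
- actions:
  - generic_key:
      descriptor_value: qps_bucket
# Response-path descriptor: only reported on stream completion (hypothetical flag placement).
- actions:
  - generic_key:
      descriptor_value: ai_token_bucket
  apply_on_stream_done: true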
Commit Message: ratelimit: option to execute action on stream done
Additional Description:
This adds a new option apply_on_stream_done to the Action message of the HTTP ratelimit filter. This basically allows configuring actions to be executed in a response-content-aware way without enforcing the rate limit (in other words, "fire-and-forget"). Since the addend can currently be controlled via the envoy.ratelimit.hits_addend metadata, another filter can be used to set that value to reflect its intent there, for example the Lua or Ext Proc filters.
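For instance (a hedged sketch, not code from this PR), a Lua filter could parse the usage from the response and record it under the envoy.ratelimit.hits_addend key; exactly where the ratelimit filter reads the addend from is an assumption here and may differ:

- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    default_source_code:
      inline_string: |
        function envoy_on_response(response_handle)
          -- Hypothetical: the upstream reports token usage in a response header.
          local used = tonumber(response_handle:headers():get("x-tokens-used") or "0")
          -- Assumption: the ratelimit filter picks the addend up from this metadata key.
          response_handle:streamInfo():dynamicMetadata():set(
            "envoy.ratelimit.hits_addend", "hits_addend", used)
        end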
This use case arises from LLM API services, which usually return usage statistics in the response body. More specifically, they have "streaming" APIs whose response is a line-by-line event stream where the very last line of the response contains the usage statistics. The lazy nature of this action is perfectly fine in these use cases, since the rate limit effectively works as "you are forbidden from the next time".
Besides the LLM-specific case, I have also encountered this use case in data center resource allocation, where the operators want to "block the computation from the next time, since you used this much resource in this request".
Ref: envoyproxy/gateway#4756
Risk Level: low
Testing: done
Docs Changes: done
Release Notes: TODO
Platform Specific Features: n/a