Support benchmarking of multiple HTTP(S) endpoints #76
base: master
Conversation
… be closed and re-opened
When EOF is received, wrk should reconnect the socket without increasing the error counter. Otherwise we could see socket read errors even in the case of ordinary reconnects.
This change forces a reconnect of all connections of a thread when wrk.thread.addr is set from a LUA script. wrk.thread.addr has always been writeable from LUA, but the actual socket connection was never updated in wrk's C code. This change enables LUA scripts to connect to multiple servers, extending the feature set of wrk. Signed-off-by: Thilo Fromm <[email protected]>
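A minimal sketch of what this change enables from a script's point of view (the backup host name and the switch-on-server-error policy are illustrative assumptions, not part of this PR):

    -- Illustrative only: after the first server error, move this thread's
    -- connection(s) over to a hypothetical backup endpoint. With this change,
    -- the assignment to wrk.thread.addr closes the current socket and
    -- reconnects to the new address.
    local switched = false

    function response(status, headers, body)
       if status >= 500 and not switched then
          wrk.thread.addr = wrk.lookup("backup.example.com", "80")[1]
          switched = true
       end
    end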
Re-connect when peer closes the connection
Signed-off-by: Thilo Fromm <[email protected]>
Signed-off-by: Thilo Fromm <[email protected]>
Signed-off-by: Thilo Fromm <[email protected]>
…ddr-is-set LUA API: force reconnect when wrk.thread.addr is set
Signed-off-by: Thilo Fromm <[email protected]>
This change makes multi-endpoint support more generic, with the motivation of making this feature useful for upstream. The LUA script 'multiple-endpoints.lua' allows for specifying an arbitrary number of HTTP(S) endpoints to include in the benchmark. Endpoints will be connected to in a random, evenly distributed fashion. After a run finishes, the overall latency will be reported (i.e. there's currently no break-down per endpoint). The main purpose of running a benchmark over multiple endpoints is to allow benchmarking of e.g. a whole web application, instead of individually benchmarking the pages and/or RESTful resources that make up said application. Signed-off-by: Thilo Fromm <[email protected]>
Example usage:
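An illustrative sketch only; the endpoint list, paths, and the wrk2 command line in the comments are assumptions and are not taken from the actual multiple-endpoints.lua:

    -- Hypothetical endpoint list; the real multiple-endpoints.lua may use a
    -- different configuration format (e.g. script arguments passed to init()).
    local endpoints = {
       "http://app.example.com/login",
       "http://api.example.com/v1/users",
       "https://static.example.com/index.html",
    }

    -- The script would then be passed to wrk2 via -s, for example:
    --   wrk -t4 -c4 -d60s -R1000 -s multiple-endpoints.lua http://app.example.com
    -- (note the threads == connections requirement described under
    -- "Known Limitations" below).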
Let's open an issue for this to discuss before we pull it in... One of my main concerns is "forking" too far from the place we originally forked wrk at, which would make catching up with wrk itself harder. And since I (personally) have not really tracked how wrk has evolved from that point, I don't know how this PR relates to features there.
Happily opening an issue to discuss if that's the preferred path - I did not see much of a benefit over discussing right here, on the PR, so I did not cut an issue right away. Regarding upstream, I'd argue that the feature introduced by this PR makes a lot more sense in the context of benchmarking with constant RPS - something upstream does not support. The main scenario we were aiming at when writing this code was to simulate constant RPS load on a cloud-native (i.e. clustered) web app (consisting of multiple micro-services with multiple URLs each), so basing our PR on wrk2 rather than upstream wrk was the natural choice. That said, I think I better understand the main concern of not diverging from upstream too much. Let me look into the latest upstream changes, with the goal of producing a PR to update this fork, before we continue discussing this PR.
…ti-endpoint-support
This PR adds support for specifying, and for benchmarking,
multiple HTTP(S) endpoints in a single wrk2 run.
Our main motivation for running a benchmark over multiple endpoints
is to allow benchmarking of e.g. a whole web application, instead
of individually benchmarking the pages and/or RESTful resources that
make up said application.
Most of the heavy lifting is done in a LUA script,
multiple-endpoints.lua.
The script allows for specifying an arbitrary number of HTTP(S) endpoints
to include in the benchmark. Endpoints will be connected to in a random, evenly
distributed fashion. After a run finishes, the overall latency will be reported
(i.e. there's currently no break-down of latency per endpoint).
Furthermore, this PR introduces a change in wrk.c that will force a thread
to reconnect (i.e. close the socket and re-open it using the current value of
wrk.thread.addr) each time wrk.thread.addr is set from a LUA script.
Lastly, the PR includes a patch by @janmejay to handle remote connection
close. @dongsupark identified this issue during our testing.
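A rough sketch of the mechanism described above, i.e. spreading threads across endpoints by assigning wrk.thread.addr (the host names, paths, and the per-request switching policy are illustrative; the actual multiple-endpoints.lua may be structured differently):

    -- Illustrative sketch: each thread picks one of several endpoints at
    -- random and switches to it by assigning wrk.thread.addr, which (with
    -- this PR) closes the current socket and reconnects to the new address.
    local hosts = { "app.example.com", "api.example.com" }   -- hypothetical
    local paths = { ["app.example.com"] = "/login",
                    ["api.example.com"] = "/v1/users" }
    local addrs = {}          -- resolved addresses per host, filled lazily
    local current_host

    function init(args)
       math.randomseed(os.time())
    end

    function request()
       local host = hosts[math.random(#hosts)]
       if host ~= current_host then
          current_host = host
          addrs[host] = addrs[host] or wrk.lookup(host, "80")[1]
          wrk.thread.addr = addrs[host]   -- triggers close + reconnect
       end
       return wrk.format("GET", paths[host], { Host = host })
    end

Because each switch closes the thread's connections mid-run, a sketch like this only works cleanly when each thread drives a single connection, which is the threads == connections restriction noted under Known Limitations below.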
Known Limitations

Please note that currently, benchmarking multiple endpoints requires
threads == connections, as we close & reconnect as soon as a thread assigns
wrk.thread.addr, which impedes ongoing async requests. There are a number of
ways to remove this limitation, and we are actively investigating them.
However, we'd like to start getting early feedback on our direction, hence
we moved to create this PR with a known limitation.