Devise a test harness for testing performance #61
Comments
I'll look into this today and tomorrow.
Wasn't able to look into this. Feel free to unassign.
How is the performance impact of changes defined? Is it meant to be a time and space benchmark against a predefined use case?
My understanding is that we want something similar to the CodeCov integration, where we can easily tell, for any given PR, whether a number of predefined measurements differ when measured on main versus on the PR branch. This way, we can both have a better understanding of PyStack's current time and space performance, and be aware when a PR changes it significantly. Any PR whose goal is to change performance would be close to meaningless without a way of measuring the difference.
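For example, asv (mentioned in the issue description below) supports exactly this workflow: it runs the same benchmark suite on two commits and flags significant differences. A sketch, where `main` and `HEAD` stand in for the base and PR branches:

```shell
# Benchmark both commits and report results that moved by more than 10%.
asv continuous --factor 1.1 main HEAD

# Tabulate the two sets of results side by side.
asv compare main HEAD
```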
Hi Gus, I implemented it in PR #165. Please have a look.
Issue description:
We know there's low-hanging fruit available for optimizing PyStack, but we currently have no good way to benchmark our performance or quantify any improvements. Design some sort of test harness that can be used to measure the performance impact of our changes, possibly using https://asv.readthedocs.io/en/stable/
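For concreteness, a minimal sketch of what one asv benchmark could look like. Everything here is an assumption rather than a finished design: the file name, the class name, and the choice of attaching `pystack remote` to an idle child process as the workload.

```python
# benchmarks/bench_remote.py -- hypothetical asv benchmark sketch.
import subprocess
import sys


class RemoteAttachSuite:
    """Time `pystack remote <pid>` against an idle child process."""

    def setup(self):
        # Spawn a throwaway interpreter to attach to; assumes the
        # environment permits ptrace-attaching to child processes.
        self.child = subprocess.Popen(
            [sys.executable, "-c", "import time; time.sleep(120)"]
        )

    def teardown(self):
        self.child.kill()
        self.child.wait()

    def time_remote_attach(self):
        # asv measures the wall-clock time of `time_*` methods.
        subprocess.run(
            ["pystack", "remote", str(self.child.pid)],
            check=True,
            stdout=subprocess.DEVNULL,
        )
```

asv discovers `time_*` methods in the `benchmarks/` directory automatically, so adding a new measurement is just a matter of adding a method, which keeps the harness cheap to extend.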