Replies: 2 comments 17 replies
-
The GitHub Action looks promising. From what I read, it gives a high-level view (e.g. mean time taken), so it could be good for detecting performance regressions. I think the local script should be more detailed, so we know where the time is being spent; cProfile, VizTracer, pyinstrument, etc. might be better for that.
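For the detailed local view, a minimal sketch using only the stdlib profiler could look like this (`slow_workload` is a hypothetical stand-in for a real code path under test):

```python
import cProfile
import io
import pstats


def slow_workload():
    # Hypothetical stand-in for the code path being investigated.
    return sum(i * i for i in range(200_000))


profiler = cProfile.Profile()
profiler.enable()
slow_workload()
profiler.disable()

# Print the ten most expensive entries by cumulative time, which shows
# *where* the time is being spent, not just the total.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

pyinstrument or VizTracer would give a nicer interactive view, but the shape of the script is the same.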
-
Yes, the local profiler would be very useful, but we still have the problem from #451: we can't thoroughly check performance on platforms other than our own local machines. This could be solved by sharing the profiler results, I think.
-
Hey there!
I want to share some of my ideas for further discussion.
Profiling:
We have multiple options (e.g. cProfile, VizTracer, pyinstrument, as mentioned above). They all have an option to generate an interactive web page from the results, and we can serve it with GitHub Pages.
We can have a separate repository for this, or serve the results alongside the original docs.
Also, we can have a `profile.sh` script for easy local profiling.

Benchmarking:
I found this action for this purpose.
The good thing about it is that we can see performance regressions and improvements as a chart over commits to master.
This can also have a script for itself.
The common part of profiling and benchmarking is a Python module with all the performance tests and the different queries. We also need to populate a database with a good amount of data before testing, so the results are realistic.
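As a sketch of what that common module could look like — all names here (`populate_db`, `run_query`, the schema) are hypothetical stand-ins for the project's real seeding and query helpers, and a tool like pytest-benchmark could wrap the same functions to produce results the benchmark action can chart:

```python
import sqlite3
import timeit


def populate_db(rows=100_000):
    # Seed an in-memory database so timings are realistic; hypothetical schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")
    conn.executemany(
        "INSERT INTO items VALUES (?, ?)",
        ((i, f"value-{i}") for i in range(rows)),
    )
    conn.commit()
    return conn


def run_query(conn):
    # Hypothetical query under test; the real module would collect
    # all the interesting queries in one place.
    return conn.execute("SELECT COUNT(*) FROM items WHERE id % 7 = 0").fetchone()[0]


if __name__ == "__main__":
    conn = populate_db()
    seconds = timeit.timeit(lambda: run_query(conn), number=10)
    print(f"run_query: {seconds / 10:.6f}s per call")
```

Keeping seeding and queries in one module means the local `profile.sh`, the benchmark workflow, and any manual investigation all measure the same thing.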
The problem with GitHub runners is that they are shared resources, so the benchmark results will vary a lot. I'm therefore not sure how much sense it makes to have workflows for this, but a profiling workflow for different platforms could be helpful.
I would appreciate your ideas about this matter to create a task list at the end.