Add visualization summary of benchmark output #9
The table-based output from red-queen is currently a bit unwieldy, especially as the number of benchmarks, compilers, versions, and platforms grows. We should find a way to quickly visualize and compare results generated by red-queen across these dimensions. There are many prior examples of this being done for other languages and compilers, e.g.:
https://perf.rust-lang.org/index.html
https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust-gpp.html
Also, the Metriq platform has a section focused on quantum compilers ( https://metriq.info/Task/57 ) that allows comparisons across compilers and metrics; we could submit red-queen results there as well.
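
As a rough sketch of the kind of summary view this could produce, here is a minimal plotting example. It assumes, hypothetically, that red-queen results have been exported to a flat CSV named `red_queen_results.csv` with `benchmark`, `compiler`, `version`, `platform`, and `runtime_s` columns; the actual output format may differ, and the column names are placeholders:

```python
# Hypothetical sketch: grouped bar chart comparing mean compiler runtimes
# per benchmark. Assumes a CSV export of red-queen results with these
# (hypothetical) columns: benchmark, compiler, version, platform, runtime_s.
import pandas as pd
import matplotlib.pyplot as plt

results = pd.read_csv("red_queen_results.csv")  # hypothetical export path

# Average runtime per (benchmark, compiler) pair, collapsing over
# versions and platforms for a first high-level comparison.
summary = (
    results.groupby(["benchmark", "compiler"])["runtime_s"]
    .mean()
    .unstack("compiler")  # one column per compiler
)

ax = summary.plot.bar(figsize=(10, 5), logy=True)
ax.set_ylabel("mean runtime (s, log scale)")
ax.set_title("red-queen benchmark runtimes by compiler")
plt.tight_layout()
plt.savefig("benchmark_summary.png")
```

Something like this could be generated in CI and published as an artifact or a static page, similar to the perf.rust-lang.org dashboard linked above.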