Investigate best practices for benchmarking comparisons between CPU and GPU #2
Comments
Hi all! Among the research papers and blogs I read, I found that pyhpc-benchmarks, a GitHub repository, is one of the most feasible resources we can use for this project.
It constructs benchmark suites for several applications over various Python backends, such as NumPy, JAX, PyTorch, and TensorFlow, and reports statistical metrics like mean, stdev, min, max, and median. When comparing CPU and GPU performance, we can control variables such as data size, number of iterations, and number of threads (for multi-threaded programs). I need a deeper understanding of pyhf before I can design a pyhf benchmark suite. @matthewfeickert Above are my current thoughts on benchmarking comparisons between CPU and GPU; I will update them as I go. Corrections are welcome!
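To make the idea concrete, here is a minimal sketch of such a harness. The `benchmark` helper and the NumPy sort workload are illustrative assumptions of mine, not code taken from pyhpc-benchmarks, but they show how those statistics could be collected while controlling the data size:

```python
# Hypothetical backend-agnostic timing harness in the spirit of
# pyhpc-benchmarks; the workload below is illustrative only.
import statistics
import time

import numpy as np


def benchmark(func, *args, repetitions=10, burnin=2):
    """Time func(*args) repeatedly and summarize with the metrics above."""
    timings = []
    for i in range(burnin + repetitions):
        start = time.perf_counter()
        func(*args)
        elapsed = time.perf_counter() - start
        if i >= burnin:  # discard warm-up runs (caches, JIT compilation)
            timings.append(elapsed)
    return {
        "mean": statistics.mean(timings),
        "stdev": statistics.stdev(timings),
        "min": min(timings),
        "max": max(timings),
        "median": statistics.median(timings),
    }


# Vary the data size, one of the variables we would control.
for size in (10**4, 10**6):
    x = np.random.default_rng(0).random(size)
    print(size, benchmark(np.sort, x))
```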
Can you share some of the more interesting ones here as references?
Maybe you can try to determine whether this is useful just by looking at CPU tests for the time being. I see that they also have examples of using it to benchmark on Google Colab GPUs, so while the group GPU machine is getting set up to support all backends you could do preliminary tests on Colab. While they advocate using Conda environments, we don't want to be restricted to being forced to use Conda. My first impression is that we really care about having a tool to quickly test the performance of the backends on different workspaces.
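Before trusting any GPU numbers from Colab (or the group machine), it would be worth probing whether each backend actually sees a GPU. A small check along these lines, assuming the three GPU-capable backends are installed, should work:

```python
# Probe GPU visibility per backend; a missing backend is reported,
# not fatal. These are the public availability checks of each library.
def report_gpu_support():
    try:
        import torch
        print("PyTorch sees CUDA:", torch.cuda.is_available())
    except ImportError:
        print("PyTorch not installed")
    try:
        import tensorflow as tf
        print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
    except ImportError:
        print("TensorFlow not installed")
    try:
        import jax
        print("JAX devices:", jax.devices())
    except ImportError:
        print("JAX not installed")


report_gpu_support()
```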
At first, I read Computing Performance Benchmarks among CPU, GPU, and FPGA. Among the benchmark suites it introduces, I dove deeper into the Rodinia and Parboil suites. Both suites cover applications from several domains, but so far I have not found a way to use Rodinia or Parboil directly in our project.

The paper also lists the metrics used in its experiments: time consumption, throughput, kernel execution time, memory usage, CPU-GPU communication time, etc. Time consumption is the most common and direct way to show the relative performance of GPU and CPU. The papers do not explain how to measure all of these metrics, so I will do more research when I start implementing the measurements.

Other useful paper: Wiki Link: Benchmark

As for the blogs I read, most are about existing benchmark software, such as blog1, which I don't think is useful for our project. If anyone can extract more useful information from the links I mentioned, or has more pointers, please share and discuss!
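For two of the metrics above, kernel execution time and CPU-GPU communication time, one possible measurement approach is CUDA events. This is only a sketch assuming the PyTorch backend on a CUDA-capable machine; the matrix size is arbitrary and other backends would need their own timers:

```python
# Sketch: measure host-to-device transfer time and kernel time with
# PyTorch CUDA events. CUDA work is asynchronous, so we must record
# events and synchronize before reading the elapsed times.
import torch

assert torch.cuda.is_available()

x_cpu = torch.randn(4096, 4096)
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

# CPU-GPU communication time: host-to-device copy.
start.record()
x_gpu = x_cpu.to("cuda")
end.record()
torch.cuda.synchronize()  # wait for the events before reading them
print(f"transfer: {start.elapsed_time(end):.2f} ms")

# Kernel execution time: a matrix multiply on the device.
start.record()
y = x_gpu @ x_gpu
end.record()
torch.cuda.synchronize()
print(f"matmul kernel: {start.elapsed_time(end):.2f} ms")
```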
I found another Python package.
Useful links:
There may be existing literature/blogs/documents on best practices for benchmarking comparisons between CPU and GPU. It would be good to investigate this and learn about what has already been done in this space.