This repository contains all of the code used to generate the JSON benchmarks and graphs for ACORNS: An easy-to-use Code Generator for Gradients and Hessians.
Please refer to the following repo for the package: https://github.com/deshanadesai/acorns
This is NOT the repository for the acorns library, and it is not maintained. It only contains the code for the benchmarks used to compare ACORNS against other methods.
In order to run the benchmarks you need PyTorch, Adept, Java 1.8 or higher, and GCC 9 installed.
- To install PyTorch using pip, simply run
pip install torch
- To install Adept, follow the instructions found here.
- NOTE: Adept was installed ONLY with
./configure
and the scripts rely on the location of the binaries.
- On a Mac you can find a working version of GCC 9 with Homebrew and install it with
brew install gcc@9
- In order to graph you must have matplotlib installed, which can be installed through pip.
This repository hosts all of the code used to generate the data and graphs for ACORNS. There are 5 main scripts that generate the data:
tests/updated_test_suite.py
- This runs ACORNS against PyTorch, Adept, Mitsuba, Enoki and Tapenade on 2 hardcoded functions:
((k*k+3*k)-k/4)/k+k*k*k*k+k*k*(22/7*k)+k*k*k*k*k*k*k*k*k
sin(k) + cos(k) + pow(k, 2)
This computes the gradient for each method and outputs the results in ./tests/results/grad/full_results-{TIMESTAMP}.json
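As an illustration of what the PyTorch baseline measures (not the actual benchmark script), the gradient of the second hardcoded function can be sketched with autograd as follows:

```python
import torch

# Illustrative sketch only: differentiate sin(k) + cos(k) + pow(k, 2),
# the second hardcoded benchmark function, with PyTorch autograd.
k = torch.tensor(1.0, requires_grad=True)
y = torch.sin(k) + torch.cos(k) + torch.pow(k, 2)
y.backward()

grad = k.grad.item()  # analytically: cos(k) - sin(k) + 2*k
```

The benchmark scripts time many such evaluations; this snippet only shows the differentiation pattern for a single point.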
tests/updated_test_suite_random.py
This runs ACORNS for the gradient against PyTorch, Adept, Mitsuba, Enoki and Tapenade on 10 different functions with an increasing number of variables (from 1 to 10). It computes the gradient for each method and outputs the results in ./tests/results/grad/full_results_random-{TIMESTAMP}.json
tests/updated_hessian_test_suite.py
This runs ACORNS for the Hessian against Mitsuba, PyTorch and Tapenade for the functions given in (1). It computes the Hessian for each method and outputs the results in ./tests/results/hess/full_results_hessian-{TIMESTAMP}.json
tests/updated_hessian_test_suite_random.py
This runs ACORNS for the Hessian against Mitsuba, PyTorch and Tapenade on 10 different functions with an increasing number of variables (from 1 to 10). It computes the Hessian for each method and outputs the results in ./tests/results/hess/full_results_hessian-{TIMESTAMP}.json
tests/parallel_test_suite.py
This runs ACORNS with OpenMP, with the thread count increasing from 1 to 25. It computes the gradient and outputs the results in ./tests/results/hess/full_results_parallel-{TIMESTAMP}.json
We graph our runtimes against all of the competitors in the ./tests/graph/ folder. Right now these files are hardcoded to accept the JSON from our latest runs, which were output from the scripts in Benchmarking.
- To graph Gradient Hardcoded run
./tests/graph/gradient_non_random_graphs-g++9.py
- To graph Gradient Random run
./tests/graph/gradient_random_graphs-g++9.py
- To graph Hessian Hardcoded run
./tests/graph/hessian_non_random_graphs-g++9.py
- To graph Hessian Random run
./tests/graph/hessian_random_graphs-g++9.py
- To graph how ACORNS scales with respect to threads run
./tests/graph/parallel_graphs.py
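The graphing scripts all follow the same basic pattern: load a results JSON and plot runtimes with matplotlib. A minimal sketch of that pattern is below; the key names ("num_vals", "acorns") are assumptions for illustration, not the real schema of the results files.

```python
import json
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Hypothetical stand-in for a full_results-{TIMESTAMP}.json payload;
# the real schema produced by the benchmark scripts may differ.
results = {"num_vals": [10, 100, 1000], "acorns": [0.01, 0.05, 0.4]}
payload = json.loads(json.dumps(results))  # mimic reading from disk

fig, ax = plt.subplots()
ax.plot(payload["num_vals"], payload["acorns"], label="ACORNS")
ax.set_xlabel("number of evaluation points")
ax.set_ylabel("runtime (s)")
ax.legend()
fig.savefig("runtime.png")
```

To point the real scripts at a new run, the hardcoded JSON paths inside them must be updated to the new timestamped filenames.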
In our paper we outline how we used this in a PolyFEM application. This was very complex and the C code was too big to include here. As such, we simply recorded the times in JSON files, which can be found in ./tests/complex/data.
- To graph file generation time run
./tests/complex/graph_file_gen.py
- To graph file sizes run
./tests/complex/graph_file_sizes.py
- To graph how PolyFEM runs with respect to file splitting run
./tests/complex/graph/graph_runs.py