A Python 3 tool for automatically contracting tensor networks for weighted model counting on multiple CPUs and on a GPU.
Because of the variety of dependencies used in the various graph decomposition tools, it is recommended to use the docker container to run TensorOrder.
The docker container (for single-core and multi-core use) can be built with the following command:
docker build --tag tensororder .
In order to leverage a GPU, you must build the (larger) docker container that includes TensorFlow and GPU drivers:
docker build --tag tensororder-gpu -f Dockerfile-gpu .
Once built, docker containers can be used as follows to run TensorOrder:
docker run -i tensororder:latest python /src/tensororder.py --planner="line-Flow" --weights="unweighted" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf"
By default, this runs the tensor network contraction on all available CPU cores with numpy. One can also choose to perform the contraction on a GPU, which requires nvidia-container-runtime to be installed:
docker run -i --gpus all tensororder-gpu:latest python /src/tensororder.py --planner="line-Flow" --weights="unweighted" --tensor_library="tensorflow-gpu" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf"
It is also possible to connect to a TPU on Google Cloud to perform the tensor contractions:
docker run -i tensororder:latest python /src/tensororder.py --timeout="1000" --planner="line-Flow" --weights="unweighted" --tensor_library="jax-tpu" --entry_type="float32" --tpu="10.6.165.2" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf"
Both docker containers are compatible with Turbine to run experiments on Google Cloud.
There is also a Singularity container available for TensorOrder.
The Singularity container can be built with the following command:
sudo singularity build tensororder Singularity
Once built, the Singularity container can be used as follows to run TensorOrder:
./tensororder --method="line-Flow" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf"
TensorOrder can also be used directly as a Python 3 tool. Since TensorOrder uses Cython, it must be compiled:
make -C src
Moreover, the various tensor methods each require additional setup. Consult the Dockerfile for an example set of installation commands; a rough sketch of one such installation is shown after the list below.
- For KCMR-metis and KCMR-gn, METIS must be installed using the instructions here.
- For line-Tamaki and factor-Tamaki, the tree-decomposition solver Tamaki must be compiled using the heuristic instructions here.
- For line-Flow and factor-Flow, the tree-decomposition solver FlowCutter must be compiled using the instructions here.
- For line-htd and factor-htd, the tree-decomposition solver htd must be compiled using the instructions here.
- For factor-hicks, the branch-decomposition solver Hicks must be compiled using the Makefile here.
- For line-portfolio3, all tree-decomposition solvers must be compiled, and the portfolio itself must be compiled using the instructions here.
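Each "here" above refers to the corresponding solver's own build instructions; the Dockerfile chains them together. As a rough illustration only (the METIS version, download location, and install prefix below are assumptions, not prescribed by TensorOrder), the KCMR-metis prerequisite follows the standard METIS build procedure:

# Illustrative sketch: download and unpack METIS 5.x first, then build it as a shared library.
cd metis-5.1.0
make config shared=1
make
sudo make install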
Once everything has been built, the primary script is located in src/tensororder.py. Example usage is:
python src/tensororder.py --method="line-Flow" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf"
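TensorOrder reads the formula to count from standard input. The benchmark files are in DIMACS CNF format; a minimal illustrative input (a hypothetical two-variable formula, not one of the provided benchmarks) looks like:

c (x1 OR x2) AND (NOT x1 OR x2); this formula has model count 2
p cnf 2 2
1 2 0
-1 2 0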
TensorOrder requires the following Python packages (see requirements.txt for a working set of exact versions if needed); a minimal install sketch appears after the list:
click
numpy
python-igraph
networkx
cython
threadpoolctl
tensorflow (optional)
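For example, assuming commands are run from the repository root, a minimal setup sketch is:

pip install -r requirements.txt
make -C src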
Please cite the following article if you use our code in a publication:
- Parallel Weighted Model Counting with Tensor Networks. Jeffrey M. Dudek and Moshe Y. Vardi. Proceedings of MCW'20.