diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..50464a9 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,2 @@ +solvers/* linguist-vendored +src/tensorcsp/* linguist-vendored diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..7c16d58 --- /dev/null +++ b/.gitignore @@ -0,0 +1,130 @@ +# Byte-compiled / optimized / DLL files +__pycache__/ +*.py[cod] +*$py.class + +# C extensions +*.so + +# Distribution / packaging +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST + +# PyInstaller +# Usually these files are written by a python script from a template +# before PyInstaller builds the exe, so as to inject date/other infos into it. +*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +.hypothesis/ +.pytest_cache/ + +# Translations +*.mo +*.pot + +# Django stuff: +*.log +local_settings.py +db.sqlite3 + +# Flask stuff: +instance/ +.webassets-cache + +# Scrapy stuff: +.scrapy + +# Sphinx documentation +docs/_build/ + +# PyBuilder +target/ + +# Jupyter Notebook +.ipynb_checkpoints + +# pyenv +.python-version + +# celery beat schedule file +celerybeat-schedule + +# SageMath parsed files +*.sage.py + +# Environments +.env +.venv +env/ +venv/ +ENV/ +env.bak/ +venv.bak/ + +# Spyder project settings +.spyderproject +.spyproject + +# Rope project settings +.ropeproject + +# mkdocs documentation +/site + +# mypy +.mypy_cache/ + +# CUSTOM Exclusions +# IPython notebooks +*.ipynb +/.idea/* + +# Compressed Benchmarks +*.zip + +# Local experiment data +/experiments/* +# Local figure data +/figures/* + +# C++ object files +/cpp/*.o +/cpp/*/*.o +/cpp/*/*/*.o +/cpp/*/*/*/*.o + +# VSCode files +/cpp/.vscode/* +/cpp/*.code-workspace + +# Singularity container +tensororder diff --git 
a/LICENSE b/LICENSE index 3d2c79f..9bf2baf 100644 --- a/LICENSE +++ b/LICENSE @@ -1,21 +1,21 @@ -MIT License - -Copyright (c) 2019 Vardi's Group - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. +MIT License + +Copyright (c) 2019 Vardi's Group + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..8cafb82 --- /dev/null +++ b/Makefile @@ -0,0 +1,7 @@ +export TENSORORDER_DIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST)))) + +tensororder: Singularity + singularity build tensororder Singularity + +clean: + rm tensororder diff --git a/README.md b/README.md index ce971c0..556748c 100644 --- a/README.md +++ b/README.md @@ -1,2 +1,37 @@ -# TensorOrder -A tool for weighted model counting through tensor network contraction +# TensorOrder +A Python 3 tool for automatically contracting tensor networks for weighted model counting. + +## Running with Singularity +Because of the variety of dependencies used in the various tree decomposition tools, it is recommended to use the [Singularity](https://www.sylabs.io/) container to run TensorOrder. + +### Building the container +The container can be built with the following commands (make requires root to build the Singularity container): +``` +git clone https://github.com/vardigroup/TensorOrder.git +cd TensorOrder +sudo make +``` + +### Usage +Once built, example usage is: +``` +./tensororder --method="line-Flow" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf" +``` + + +## Running without Singularity +TensorOrder can also be used directly as a Python 3 tool. The primary script is located in `src/tensororder.py`. 
Example usage is +```python src/tensororder.py --method="line-Flow" < "benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf" ``` + +TensorOrder requires the following python packages (see [requirements.txt](requirements.txt) for a working set of exact version information if needed): +1. `click` +2. `numpy` +3. `python-igraph` +4. `networkx` +5. `cached_property` + +Moreover, the various tensor methods each require additional setup. +* For KCMR-metis and KCMR-gn, METIS must be installed using the instructions [here](src/tensorcsp). +* For line-Tamaki and factor-Tamaki, the tree-decomposition solver Tamaki must be compiled using the `heuristic` instructions [here](solvers/TCS-Meiji). +* For line-Flow and factor-Flow, the tree-decomposition solver FlowCutter must be compiled using the instructions [here](solvers/flow-cutter-pace17). +* For line-htd and factor-htd, the tree-decomposition solver htd must be compiled using the instructions [here](solvers/htd-master). \ No newline at end of file diff --git a/Singularity b/Singularity new file mode 100644 index 0000000..9944c65 --- /dev/null +++ b/Singularity @@ -0,0 +1,57 @@ +Bootstrap: docker +From: python:3.7-slim + +%setup + cp -R ${TENSORORDER_DIR-$PWD}/solvers ${SINGULARITY_ROOTFS}/solvers + cp -R ${TENSORORDER_DIR-$PWD}/src ${SINGULARITY_ROOTFS}/src + wget http://glaros.dtc.umn.edu/gkhome/fetch/sw/metis/metis-5.1.0.tar.gz -P ${SINGULARITY_ROOTFS}/solvers/ + +%post + apt-get update + + # TensorOrder + apt-get -y install g++ make libxml2-dev zlib1g-dev + pip install click numpy python-igraph networkx cached_property + + # METIS + apt-get -y install g++ make cmake + cd /solvers/ + tar -xvf metis-5.1.0.tar.gz + rm metis-5.1.0.tar.gz + cd /solvers/metis-5.1.0 + make config shared=1 + make + make install + pip install metis + + # TCS-Meiji + # deal with slim variants not having man page directories (which causes "update-alternatives" to fail) + mkdir -p /usr/share/man/man1 + apt-get install -y make openjdk-11-jdk + cd 
/solvers/TCS-Meiji + make heuristic + + # FlowCutter + apt-get -y install g++ + cd /solvers/flow-cutter-pace17 + chmod +x ./build.sh + ./build.sh + + # Htd + apt-get -y install g++ cmake + cd /solvers/htd-master + cmake . + make + +%environment + export METIS_DLL=/solvers/metis-5.1.0/build/Linux-x86_64/libmetis/libmetis.so + +%help + This is a Singularity container for the TensorOrder tool. + See "$SINGULARITY_NAME --help" for usage. + +%runscript + export TENSORORDER_CALLER="$SINGULARITY_NAME" + exec python /src/tensororder.py "$@" + + diff --git a/benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf b/benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf new file mode 100644 index 0000000..b39d163 --- /dev/null +++ b/benchmarks/cubic_vertex_cover/cubic_vc_50_0.cnf @@ -0,0 +1,76 @@ +p cnf 50 75 +1 26 0 +1 2 0 +1 11 0 +2 4 0 +2 48 0 +3 32 0 +3 19 0 +3 23 0 +4 13 0 +4 41 0 +5 13 0 +5 34 0 +5 17 0 +6 32 0 +6 7 0 +6 42 0 +7 46 0 +7 10 0 +8 38 0 +8 37 0 +8 30 0 +9 21 0 +9 38 0 +9 10 0 +10 27 0 +11 21 0 +11 20 0 +12 23 0 +12 15 0 +12 36 0 +13 44 0 +14 45 0 +14 24 0 +14 40 0 +15 35 0 +15 20 0 +16 36 0 +16 48 0 +16 27 0 +17 45 0 +17 39 0 +18 26 0 +18 39 0 +18 50 0 +19 50 0 +19 22 0 +20 22 0 +21 36 0 +22 47 0 +23 46 0 +24 43 0 +24 31 0 +25 28 0 +25 43 0 +25 30 0 +26 42 0 +27 29 0 +28 31 0 +28 47 0 +29 49 0 +29 48 0 +30 34 0 +31 47 0 +32 35 0 +33 35 0 +33 49 0 +33 37 0 +34 44 0 +37 50 0 +38 46 0 +39 43 0 +40 44 0 +40 41 0 +41 42 0 +45 49 0 diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..e9e110b --- /dev/null +++ b/requirements.txt @@ -0,0 +1,12 @@ +cached-property==1.5.1 +certifi==2018.4.16 +Click==7.0 +decorator==4.3.0 +llvmlite==0.26.0 +metis==0.2a4 +networkx==2.1 +numba==0.41.0 +numpy==1.15.0 +python-igraph==0.7.1.post6 +scipy==1.1.0 +sparse==0.5.0+16.ge80efc7.dirty diff --git a/solvers/TCS-Meiji/LICENSE b/solvers/TCS-Meiji/LICENSE new file mode 100644 index 0000000..4aed0b4 --- /dev/null +++ b/solvers/TCS-Meiji/LICENSE @@ -0,0 +1,21 @@ +MIT 
License + +Copyright (c) 2017 Hisao Tamaki, Hiromu Ohtsuka, Takuto Sato, Keitaro Makii + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/solvers/TCS-Meiji/Makefile b/solvers/TCS-Meiji/Makefile new file mode 100644 index 0000000..5075c35 --- /dev/null +++ b/solvers/TCS-Meiji/Makefile @@ -0,0 +1,10 @@ +all: exact heuristic + +exact: + javac tw/exact/*.java + +heuristic: + javac tw/heuristic/*.java + +clean: + rm tw/*/*.class \ No newline at end of file diff --git a/solvers/TCS-Meiji/README b/solvers/TCS-Meiji/README new file mode 100644 index 0000000..9f6c4ea --- /dev/null +++ b/solvers/TCS-Meiji/README @@ -0,0 +1,38 @@ +This repository is primarily for PACE 2017 Track A submissions. + +This repository contains both exact and heuristic submissions. + +The exact algorithm implements the algorithm proposed in: +Tamaki, Hisao. "Positive-instance driven dynamic programming for treewidth." +arXiv preprint arXiv:1704.05286 (2017). 
+
+The heuristic algorithm starts with a greedy solution
+and tries to improve the current solution
+through local improvements. It looks at a
+subtree of the current tree-decomposition around the largest bag and
+runs the following decomposition algorithms on the
+subgraph corresponding to this subtree in a round-robin manner:
+the exact treewidth algorithm, an exact pathwidth algorithm,
+and a greedy heuristic algorithm.
+
+The final team members
+Exact: Hisao Tamaki and Hiromu Ohtsuka
+Heuristic: Hisao Tamaki, Hiromu Ohtsuka, Takuto Sato and Keitaro Makii
+
+If you use the implementation provided in this repository in research work,
+please cite the above paper and/or this repository in your publication reporting
+the work.
+
+Usage:
+$ make exact
+for making the exact submission
+$ make heuristic
+for making the heuristic submission, or
+$ make
+for making both
+
+The commands are tw-exact and tw-heuristic as specified by the challenge rules.
+These commands are implemented as shell scripts.
+
+
+
diff --git a/solvers/TCS-Meiji/batch b/solvers/TCS-Meiji/batch
new file mode 100644
index 0000000..246be3f
--- /dev/null
+++ b/solvers/TCS-Meiji/batch
@@ -0,0 +1,10 @@
+#!/bin/sh
+#
+for infile in `ls test_instance`
+do
+  file=${infile%.*}
+  outfile="$file.td"
+  echo $file
+  ./tw-exact2 < "test_instance/$infile" |\
+    tr -d '\r' > "test_result/$outfile"
+done
\ No newline at end of file
diff --git a/solvers/TCS-Meiji/td-validate-master/LICENSE b/solvers/TCS-Meiji/td-validate-master/LICENSE
new file mode 100644
index 0000000..8ef8eb8
--- /dev/null
+++ b/solvers/TCS-Meiji/td-validate-master/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2016 Holger Dell
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/solvers/TCS-Meiji/td-validate-master/Makefile b/solvers/TCS-Meiji/td-validate-master/Makefile new file mode 100644 index 0000000..9225ed7 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/Makefile @@ -0,0 +1,21 @@ +CXXFLAGS=-std=c++11 -O3 -march=native -Wall -Wextra + +.PHONY: all +all: td-validate + +.PHONY: test +test: td-validate + ./test_td-validate.sh + +.PHONY: clean +clean: + rm -f td-validate + rm -f td-validate.cpp.gcov td-validate.gcno td-validate.gcda + +# Use this to determine code coverage for automated tests +.PHONY: coverage +coverage: CXXFLAGS += -O0 -fprofile-arcs -ftest-coverage +coverage: td-validate.cpp.gcov + +td-validate.cpp.gcov: test + gcov -r td-validate.cpp diff --git a/solvers/TCS-Meiji/td-validate-master/README.md b/solvers/TCS-Meiji/td-validate-master/README.md new file mode 100644 index 0000000..e06c6e3 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/README.md @@ -0,0 +1,30 @@ +# Tree decomposition validity checker + +This software verifies that a given, purported tree decomposition is a correct tree decomposition of a given graph. It is part of the first Parameterized Algorithms and Computational Experiments Challenge ([PACE 2016](https://pacechallenge.wordpress.com/track-a-treewidth/)). + +## Usage + +Check that input.gr and input.td correctly encode a graph and one of its tree decompositions: +``` +./td-validate input.gr input.td +``` + +Check that input.gr correctly encodes a graph: +``` +./td-validate input.gr +``` + +## Build + +Run `make` and, optionally, `make test`. 
+ +## Credits + +- Cornelius Brand +- Holger Dell +- Lukas Larisch +- Felix Salfelder + +## License + +This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details diff --git a/solvers/TCS-Meiji/td-validate-master/autotest-tw-solver.py b/solvers/TCS-Meiji/td-validate-master/autotest-tw-solver.py new file mode 100644 index 0000000..7d88e6b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/autotest-tw-solver.py @@ -0,0 +1,185 @@ +#!/usr/bin/env python3 + +''' +Automatically test a given treewidth solver: + ./autotest-tw-solver.py path/to/my/solver + +The test is run on some corner cases, and on graphs were past PACE submissions +exhibited bugs. + +Optional argument: + --full run test on all graphs with min-degree 3 and at most 8 vertices + +Requires python3-networkx + +Copyright 2016, Holger Dell +Licensed under GPLv3. +''' + +import os +import subprocess +import threading +import glob +import tempfile +import signal +import argparse +import networkx + + + +def read_tw_from_td(ifstream): + '''Return the reported treewidth from a .td file''' + for line in ifstream: + if line[0] == 's': + treewidth=int(line.split(' ')[3]) - 1 + return treewidth + + +def test_case_generator(full=False): + ''' + Return a generator for all test cases. 
+ + Each test case is a tuple (name, grfilestream, treewidth) + where + - name is a string indicating the name of the test case + - grfilestream is a stream from which the grfile can be read + - treewidth is the known treewidth of the graph (or None if we don't care) + ''' + + # This covers some corner cases (comments, empty graphs, etc) + for grfile in glob.glob('test/valid/*.gr'): + yield grfile,open(grfile,'r'),None + + # Test cases where some tw-solvers were buggy in the past + for grfile in glob.glob('test/tw-solver-bugs/*.gr'): + treewidth = None + with open(grfile[:-3] + '.td') as td_stream: + treewidth = read_tw_from_td(td_stream) + yield grfile,open(grfile,'r'),treewidth + + # More test cases where some tw-solvers were buggy in the past + tests=[ 'test/tw-solver-bugs.graph6' ] + + if full: + tests.append('test/n_upto_8.graph6') + + for fname in tests: + with open(fname) as tests: + for line in tests: + line = line.strip().split(' ') + graph6 = line[0] + treewidth = int(line[1]) + + G = networkx.parse_graph6(graph6) + n = G.order() + m = G.size() + + with tempfile.TemporaryFile('w+') as tmp: + tmp.write("p tw {:d} {:d}\n".format(n,m)) + for (u,v) in G.edges(data=False): + tmp.write("{:d} {:d}\n".format(u+1,v+1)) + tmp.flush() + tmp.seek(0) + yield graph6 + ' from ' + fname,tmp,treewidth + + +tw_executable = '' +FNULL = open(os.devnull, 'w') + +def td_validate(grstream, tdstream): + with tempfile.NamedTemporaryFile('w+') as tmp_td: + for line in tdstream: + tmp_td.write(line) + tmp_td.flush() + tmp_td.seek(0) + with tempfile.NamedTemporaryFile('w+') as tmp_gr: + for line in grstream: + tmp_gr.write(line) + tmp_gr.flush() + tmp_gr.seek(0) + + p = subprocess.Popen(['./td-validate',tmp_gr.name,tmp_td.name]) + p.wait() + return p.returncode == 0 + + +def run_one_testcase(arg): + '''given the name of a testcase, the input stream for a .gr file, and the + correct treewidth, this function runs the test''' + + global tw_executable + name, ifstream, treewidth = arg + 
+ with tempfile.TemporaryFile('w+') as tmp_td: + p = subprocess.Popen([tw_executable], + stdin=ifstream, + stdout=tmp_td, + stderr=FNULL) + + try: + p.wait(timeout=5) + except subprocess.TimeoutExpired: + p.terminate() + try: + p.wait(timeout=5) + except subprocess.TimeoutExpired: + p.kill() + + ifstream.seek(0) + tmp_td.flush() + tmp_td.seek(0) + print(name) + valid = td_validate(ifstream, tmp_td) + ifstream.close() + + tmp_td.seek(0) + computed_tw = read_tw_from_td(tmp_td) + + if treewidth != None and computed_tw != None: + if treewidth > computed_tw: + print('!! your program said tw={:d} but we thought it was {:d} -- please send your .td file to the developer of td-validate'.format(computed_tw,treewidth)) + elif treewidth < computed_tw: + print("non-optimal (your_tw={:d}, optimal_tw={:d})".format(computed_tw,treewidth)) + nonoptimal = treewidth != None and computed_tw != None and treewidth < computed_tw + print() + return valid,nonoptimal + +def main(): + parser = argparse.ArgumentParser(description='Automatically test a given treewidth solver') + parser.add_argument("twsolver", help="path to the treewidth solver you want to test") + parser.add_argument("--full", help="run test on all 2753 graphs with min-degree 3 and at most 8 vertices (this could take a while)", + action='store_true') + + args = parser.parse_args() + + global tw_executable + tw_executable = args.twsolver + + f='./td-validate' + if not os.path.isfile(f): + print("File {:s} not found. 
Run 'make' first!\n".format(f)) + return + + print("Automatically testing {:s}...\n".format(tw_executable)) + + results = list(map(run_one_testcase, test_case_generator(args.full))) + + total=len(results) + total_valid = 0 + total_nonoptimal = 0 + for valid,nonoptimal in results: + if valid: total_valid += 1 + if nonoptimal: total_nonoptimal += 1 + + print() + if total == total_valid: + print('Produced a valid .td on all {:d} instances.'.format(total)) + else: + print('{:d} out of {:d} tests produced a valid .td'.format(total_valid,total)) + if total_nonoptimal == 0: + print('All tree decompositions were optimal') + else: + print('{:d} tree decompositions were not optimal'.format(total_nonoptimal)) + +if __name__ == '__main__': + main() diff --git a/solvers/TCS-Meiji/td-validate-master/td-validate.cpp b/solvers/TCS-Meiji/td-validate-master/td-validate.cpp new file mode 100644 index 0000000..56f1421 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/td-validate.cpp @@ -0,0 +1,797 @@ +#include +#include +#include +#include +#include +#include +#include + +/* + * If USE_VECTOR is set, use std::vector to store adjacency lists; this seems to + * be faster for the instances we observed, where the degrees tend to be small + */ +#define USE_VECTOR + +/* + * An unsigned variant of std::stoi that also checks if the input consists + * entirely of base-10 digits + */ +unsigned pure_stou(const std::string& s) { + if(s.empty() || s.find_first_not_of("0123456789") != std::string::npos) { + throw std::invalid_argument("Non-numeric entry '" + s + "'"); + } + unsigned long result = std::stoul(s); + if (result > std::numeric_limits::max()) { + throw std::out_of_range("stou"); + } + return result; +} + +#ifdef USE_VECTOR +/* + * The most basic graph structure you could imagine, in adjacency list + * representation. This is not supposed to be any kind of general-purpose graph + * class, but only implements what we need for our computations. 
E.g., removing + * vertices is not supported. + */ +struct graph { + std::vector > adj_list; + unsigned num_vertices = 0; + unsigned num_edges = 0; + + /* + * Adds a vertex to the vertex set and returns its index. + * Vertices in this class are indexed starting with 0 (!). + * This is important because the vertices in the file format are not. + */ + unsigned add_vertex() { + adj_list.push_back(std::vector()); + return num_vertices++; + } + + /* + * The neighbors of a given vertex, where the index is, as usual, starting + * with 0. Does *not* include the vertex itself + */ + std::vector& neighbors(unsigned vertex_index) { + return adj_list[vertex_index]; + } + + /* + * Adds an undirected edge between two vertices, identified by their index; no + * checks are performed and bad indices can cause segfaults. + */ + void add_edge(unsigned u, unsigned v) { + adj_list.at(u).push_back(v); + adj_list.at(v).push_back(u); + num_edges++; + } + + /* + * Removes an undirected edge + */ + void remove_edge(unsigned u, unsigned v) { + bool end1 = false; + bool end2 = false; + auto v_it = std::find(adj_list.at(u).begin(), adj_list.at(u).end(), v); + if (v_it != adj_list.at(u).end()) { + adj_list.at(u).erase(v_it); + end1 = true; + } + + auto u_it = std::find(adj_list.at(v).begin(), adj_list.at(v).end(), u); + if (u_it != adj_list.at(v).end()) { + adj_list.at(v).erase(u_it); + end2 = true; + } + + if (end1 && end2) { + num_edges--; + } + } +}; +#else +struct graph { + std::vector > adj_list; + unsigned num_vertices = 0; + unsigned num_edges = 0; + + unsigned add_vertex() { + adj_list.push_back(std::set()); + return num_vertices++; + } + + std::set& neighbors(unsigned vertex_index) { + return adj_list[vertex_index]; + } + + void add_edge(unsigned u, unsigned v) { + adj_list[u].insert(v); + adj_list[v].insert(u); + num_edges++; + } + + void remove_edge(unsigned u, unsigned v) { + bool end1 = false; + bool end2 = false; + auto v_it = adj_list[u].find(v); + if (v_it != adj_list[u].end()) 
{ + adj_list[u].erase(v_it); + end1 = true; + } + + auto u_it = adj_list[v].find(u); + if (u_it != adj_list[v].end()) { + adj_list[v].erase(u_it); + end2 = true; + } + + if (end1 && end2) { + num_edges--; + } + } +}; +#endif + +/* + * Type for a tree decomposition; i.e. a set of bags (vertices) together with + * adjacency lists on this set. As the graph class, this is highly minimalistic + * and implements all operations in such a way that our algorithms may work on + * it. + */ +struct tree_decomposition { + typedef std::set vertex_t; + std::vector bags; + std::vector> adj_list; + + /* + * The number of *relevant* bags currently in the tree; bags that were removed + * using remove_vertex and continue to exist empty and isolated are not + * counted. + */ + unsigned num_vertices = 0; + unsigned num_edges = 0; + + /* + * Adds a given bag to the vertex set of the decomposition and returns its + * index. Vertices in this class are indexed starting with 0 (!). This is + * important because the vertices in the file format are not. 
+ */ + unsigned add_bag(vertex_t& bag) { + bags.push_back(bag); + adj_list.push_back(std::vector()); + return num_vertices++; + } + + /* + * See the graph class + */ + std::vector& neighbors(unsigned vertex_index) { + return adj_list.at(vertex_index); + } + + /* + * See the graph class + */ + void add_edge(unsigned u, unsigned v) { + adj_list.at(u).push_back(v); + adj_list.at(v).push_back(u); + num_edges++; + } + + /* + * See the graph class + */ + void remove_edge(unsigned u, unsigned v) { + auto v_it = std::find(adj_list.at(u).begin(),adj_list.at(u).end(),v); + auto u_it = std::find(adj_list.at(v).begin(),adj_list.at(v).end(),u); + if (v_it != adj_list.at(u).end() && u_it != adj_list.at(v).end()) { + adj_list.at(u).erase(v_it); + adj_list.at(v).erase(u_it); + num_edges--; + } + } + + /* + * Removes a vertex, in the following sense: The bag corresponding to the + * index is emptied and all adjacencies are removed, i.e., the bag will + * contain 0 vertices and have no incident edges. Nevertheless, the number of + * vertices is reduced (hence num_vertices--). 
+ */ + void remove_vertex(unsigned u) { + bags.at(u).clear(); + std::vector remove; + for (auto it = adj_list.at(u).begin(); it != adj_list.at(u).end(); it++) { + remove.push_back(*it); + } + for (auto it = remove.begin(); it != remove.end(); it++) { + remove_edge(u,*it); + } + num_vertices--; + } + + /* + * Get the u-th bag + */ + vertex_t& get_bag(unsigned u) { + return bags.at(u); + } + + /* + * Checks if the given decomposition constitutes a tree using DFS + */ + bool is_tree() { + if ((num_vertices > 0) && num_vertices - 1 != num_edges) { + return false; + } else if (num_vertices == 0) { + return (num_edges == 0); + } + + std::vector seen(num_vertices, 0); + unsigned seen_size = 0; + + bool cycle = !tree_dfs(seen,0,-1,seen_size); + if (cycle || seen_size != num_vertices) { + return false; + } + return true; + } + + /* + * Helper method for is_tree(); not to be called from outside of the class + */ + bool tree_dfs(std::vector& seen, unsigned root, unsigned parent, unsigned& num_seen) { + if (seen[root] != 0) { + return false; + } + + seen[root] = 1; + num_seen++; + + for (auto it = adj_list[root].begin(); it != adj_list[root].end(); it++) { + if (*it != parent) { + if (!tree_dfs(seen, *it, root,num_seen)) { + return false; + } + } + } + return true; + } +}; + +/* + * The different states the syntax checker may find itself in while checking the + * syntax of a *.gr- or *.td-file. 
+ */ +enum State { + COMMENT_SECTION, + S_LINE, + BAGS, + EDGES, + P_LINE +}; + +/* + * Messages associated with the respective exceptions to be thrown while + * checking the files + */ +const char* INV_FMT = "Invalid format"; +const char* INV_SOLN = "Invalid s-line"; +const char* INV_SOLN_BAGSIZE = "Invalid s-line: Reported bagsize and actual bagsize differ"; +const char* INV_EDGE = "Invalid edge"; +const char* INV_BAG = "Invalid bag"; +const char* INV_BAG_INDEX = "Invalid bag index"; +const char* INC_SOLN = "Inconsistent values in s-line"; +const char* NO_BAG_INDEX = "No bag index given"; +const char* BAG_MISSING = "Bag missing"; +const char* FILE_ERROR = "Could not open file"; +const char* EMPTY_LINE = "No empty lines allowed"; +const char* INV_PROB = "Invalid p-line"; + +/* + * The state the syntax checker is currently in; this variable is used for both + * checking the tree *and* the graph + */ +State current_state = COMMENT_SECTION; + +/* + * A collection of global variables that are relentlessly manipulated and read + * from different positions of the program. Initially, they are set while + * reading the input files. + */ + +/* The number of vertices of the graph underlying the decomposition, as stated + * in the *.td-file + */ +unsigned n_graph; + +/* The number of bags as stated in the *.td-file */ +unsigned n_decomp; + +/* The width of the decomposition as stated in the *.td-file */ +unsigned width; + +/* The number of vertices of the graph as stated in the *.gr-file */ +unsigned n_vertices; + +/* The number of edges of the graph as stated in the *.gr-file */ +unsigned n_edges; + +/* The maximal width of some bag, to compare with the one stated in the file */ +unsigned real_width; + +/* A vector to record which of the bags were seen so far, while going through + * the *.td-file, as to ensure uniqueness of each bag. 
+ */ +std::vector bags_seen; + +/* Temporary storage for all the bags before they are inserted into the actual + * decomposition; we might as well directly manipulate the tree TODO + */ +std::vector > bags; + +/* + * Given the tokens from one line (split on whitespace), this reads the + * s-line from these tokens and initializes the corresponding globals + */ +void read_solution(const std::vector& tokens) +{ + if (current_state != COMMENT_SECTION) { + throw std::invalid_argument(INV_FMT); + } + current_state = S_LINE; + if(tokens.size() != 5 || tokens[1] != "td") { + throw std::invalid_argument(INV_SOLN); + } + + n_decomp = pure_stou(tokens[2]); + width = pure_stou(tokens[3]); + n_graph = pure_stou(tokens[4]); + if (width > n_graph) { + throw std::invalid_argument(INC_SOLN); + } +} + +/* + * Given the tokens from one line (split on whitespace), this reads the bag + * represented by these tokens and manipulates the global bags accordingly + */ +void read_bag(const std::vector& tokens) +{ + if (current_state == S_LINE) { + current_state = BAGS; + bags.resize(n_decomp); + bags_seen.resize(n_decomp,0); + } + + if (current_state != BAGS) { + throw std::invalid_argument(INV_FMT); + } + + if(tokens.size() < 2) { + throw std::invalid_argument(NO_BAG_INDEX); + } + + unsigned bag_num = pure_stou(tokens[1]); + if (bag_num < 1 || bag_num > n_decomp || bags_seen[bag_num-1] != 0) { + throw std::invalid_argument(INV_BAG_INDEX); + } + bags_seen[bag_num-1] = 1; + + for(unsigned i = 2; i < tokens.size(); i++) { + if (tokens[i] == "") break; + unsigned id = pure_stou(tokens[i]); + if (id < 1 || id > n_graph) { + throw std::invalid_argument(INV_BAG); + } + + bags[bag_num-1].insert(id); + } + + if(bags[bag_num-1].size() > real_width) { + real_width = bags[bag_num-1].size(); + } +} + +/* + * Given the tokens from one line (split on whitespace) and a tree + * decomposition, this reads the edge represented by this line (in the + * decomposition) and adds the respective edge to the tree 
decomposition + */ +void read_decomp_edge(const std::vector& tokens, tree_decomposition &T) +{ + if (current_state == BAGS) { + for (auto it = bags_seen.begin(); it != bags_seen.end(); it++) { + if (*it == 0) { + throw std::invalid_argument(BAG_MISSING); + } + } + + for (auto it = bags.begin(); it != bags.end(); it++) { + T.add_bag(*it); + } + current_state = EDGES; + } + + if (current_state != EDGES) { + throw std::invalid_argument(INV_FMT); + } + + unsigned s = pure_stou(tokens[0]); + unsigned d = pure_stou(tokens[1]); + if(s < 1 || d < 1 || s > n_decomp || d > n_decomp) { + throw std::invalid_argument(INV_EDGE); + } + T.add_edge(s-1, d-1); +} + +/* + * Given the tokens from one line (split on whitespace), this reads the + * p-line from these tokens and initializes the corresponding globals + */ +void read_problem(const std::vector& tokens, graph& g) { + if (current_state != COMMENT_SECTION) { + throw std::invalid_argument(INV_FMT); + } + current_state = P_LINE; + + if(tokens.size() != 4 || tokens[1] != "tw") { + throw std::invalid_argument(INV_PROB); + } + + n_vertices = pure_stou(tokens[2]); + n_edges = pure_stou(tokens[3]); + + while (g.add_vertex()+1 < n_vertices) {}; +} + +/* + * Given the tokens from one line (split on whitespace) and a tree + * decomposition, this reads the edge (in the graph) represented by this line + * and adds the respective edge to the graph + */ +void read_graph_edge(const std::vector& tokens, graph& g) +{ + if (current_state == P_LINE) { + current_state = EDGES; + } + + if (current_state != EDGES) { + throw std::invalid_argument(INV_FMT); + } + + unsigned s = pure_stou(tokens[0]); + unsigned d = pure_stou(tokens[1]); + if(s < 1 || d < 1 || s > n_vertices || d > n_vertices) { + throw std::invalid_argument(INV_EDGE); + } + g.add_edge(s-1, d-1); +} + +/* + * Given a stream to the input file in the *.gr-format, this reads from the file + * the graph represented by this file. 
If the file is not conforming to the + * format, it throws a corresponding std::invalid_argument with one of the error + * messages defined above. + */ +void read_graph(std::ifstream& fin, graph& g) { + current_state = COMMENT_SECTION; + n_edges = -1; + n_vertices = -1; + + if(!fin.is_open()){ + throw std::invalid_argument(FILE_ERROR); + } + + std::string line; + std::string delimiter = " "; + + while(std::getline(fin, line)) { + if(line == "" || line == "\n") { + throw std::invalid_argument(EMPTY_LINE); + } + + std::vector<std::string> tokens; + size_t oldpos = 0; + size_t newpos = 0; + + while(newpos != std::string::npos) { + newpos = line.find(delimiter, oldpos); + tokens.push_back(line.substr(oldpos, newpos-oldpos)); + oldpos = newpos + delimiter.size(); + } + if (tokens[0] == "c") { + continue; + } else if (tokens[0] == "p") { + read_problem(tokens,g); + } else if (tokens.size() == 2){ + read_graph_edge(tokens, g); + } else { + throw std::invalid_argument(std::string(INV_EDGE) + " (an edge has exactly two endpoints)"); + } + } + + if (g.num_edges != n_edges) { + throw std::invalid_argument(std::string(INV_PROB) + " (incorrect number of edges)"); + } +} + +/* + * Given a stream to the input file in the *.td-format, this reads from the file + * the decomposition represented by this file. If the file is not conforming to + * the format, it throws a corresponding std::invalid_argument with one of the + * error messages defined above.
+ */ +void read_tree_decomposition(std::ifstream& fin, tree_decomposition& T) +{ + current_state = COMMENT_SECTION; + n_graph = -1; + n_decomp = 0; + width = -2; + bags_seen.clear(); + bags.clear(); + + if(!fin.is_open()){ + throw std::invalid_argument(FILE_ERROR); + } + + std::string line; + std::string delimiter = " "; + + while(std::getline(fin, line)) { + if(line == "" || line == "\n") { + throw std::invalid_argument(EMPTY_LINE); + } + + std::vector<std::string> tokens; + size_t oldpos = 0; + size_t newpos = 0; + + while(newpos != std::string::npos) { + newpos = line.find(delimiter, oldpos); + tokens.push_back(line.substr(oldpos, newpos-oldpos)); + oldpos = newpos + delimiter.size(); + } + + if (tokens[0] == "c") { + continue; + } else if (tokens[0] == "s") { + read_solution(tokens); + } else if (tokens[0] == "b") { + read_bag(tokens); + } else { + read_decomp_edge(tokens, T); + } + } + + if (current_state == BAGS) { + for (auto it = bags.begin(); it != bags.end(); it++) { + T.add_bag(*it); + } + } + + if (width != real_width) { + throw std::invalid_argument(INV_SOLN_BAGSIZE); + } +} + +/* + * Given a graph and a decomposition, this checks whether or not the set of + * vertices in the graph equals the union of all bags in the decomposition.
+ */ +bool check_vertex_coverage(tree_decomposition& T) +{ + if (!(n_graph == n_vertices)) { + std::cerr << "Error: .gr and .td disagree on how many vertices the graph has" << std::endl; + return false; + } else if (n_vertices == 0) return true; + + std::vector<unsigned> occurrence_nums(n_graph, 0); + for (unsigned i = 0; i < n_decomp; i++) { + for (auto it = T.get_bag(i).begin(); it != T.get_bag(i).end(); it++) { + occurrence_nums[*it - 1]++; + } + } + + for (unsigned i = 0; i < occurrence_nums.size(); i++) { + if (occurrence_nums[i] == 0) { + std::cerr << "Error: vertex " << (i+1) << " appears in no bag" << std::endl; + return false; + } + } + return true; +} + +/* + * Given a graph and a decomposition, this checks whether or not each edge is + * contained in at least one bag. This has the side effect of removing from the + * graph all those edges of the graph that are in fact contained in some bag. + * The emptiness of the edge set of the resulting pruned graph is used to decide + * whether the decomposition has this property. + */ +bool check_edge_coverage(tree_decomposition& T, graph& g) +{ + std::vector<std::pair<unsigned,unsigned> > to_remove; + /* + * We go through all bags, and for each vertex in each bag, we remove all its + * incident edges whose other endpoint also lies in the bag. If all the edges + * are indeed covered by at least one bag, this will leave the graph with an + * empty edge-set, and it won't if they aren't.
+ */ + for(unsigned i = 0; i < T.bags.size() && g.num_edges > 0; i++){ + std::set<unsigned>& it_bag = T.get_bag(i); + for (std::set<unsigned>::iterator head = it_bag.begin(); head != it_bag.end(); head++) { + for(auto tail = g.neighbors(*head-1).begin(); tail != g.neighbors(*head-1).end(); tail++) { + if(it_bag.find(*tail+1) != it_bag.end()) { + to_remove.push_back(std::pair<unsigned,unsigned>(*head,*tail)); + } + } + for (std::vector<std::pair<unsigned,unsigned> >::iterator rem_it = to_remove.begin(); rem_it != to_remove.end(); rem_it++) { + g.remove_edge(rem_it->first-1,rem_it->second); + } + to_remove.clear(); + } + } + + if (g.num_edges > 0) + { + for (unsigned u = 0; u < g.num_vertices; u++) + { + if (! g.adj_list.at(u).empty() ) + { + unsigned v=g.adj_list.at(u).front(); + std::cerr << "Error: edge {"<< (u+1) << ", " << (v+1) << "} appears in no bag" << std::endl; + break; + } + } + } + return (g.num_edges == 0); +} + +/* + * Given a graph and a decomposition, this checks whether or not the set of bags + * in which a given vertex from the graph appears forms a subtree of the tree + * decomposition. It does so by successively removing leaves from the + * decomposition until the tree is empty, and hence the decomposition will + * consist only of isolated empty bags after calling this function. + */ +bool check_connectedness(tree_decomposition& T) +{ + /* + * At each leaf, we first check whether it contains some forgotten vertex (in + * which case we return false), and if it doesn't, we compute whether there + * are any vertices that appear in the leaf, but not in its parent, and if so, + * we add those vertices to the set of forgotten vertices (Now that I come to + * think of it, 'forbidden' might be a better term TODO.) We then remove the + * leaf and continue with the next leaf. We return true if we never encounter + * a forgotten vertex before the tree is entirely deleted.
+ * + * It is quite easy to see that a tree decomposition satisfies this property + * (given that it satisfies the others) if and only if it satisfies the + * condition of the bags containing any given vertex forming a subtree of the + * decomposition. + */ + std::set<unsigned> forgotten; + + + /* A vector to keep track of those bags that were removed during the check for + * subtree connectivity */ + std::vector<unsigned> bags_removed; + bags_removed.resize(T.bags.size(),0); + + while (T.num_vertices > 0) { + /* In every iteration, we first find a leaf of the tree. We do this in an + * overly naive way by running through all of the tree, but given the small + * size of the instances, this does not matter much. This can be formulated + * using some form of DFS-visitor-pattern, but this gets the job done as + * well. + */ + for(unsigned i = 0; i < T.bags.size(); i++) { + /* + * If we are dealing with a leaf (or an isolated, non-empty bag, which may + * only happen in the last iteration when there is only one non-empty bag + * left, by the semantics of the remove_vertex-method), then we first see + * if there is a non-empty intersection with the set of forgotten + * vertices, and if so, return false. If this is not the case, we add all + * vertices appearing in the bag but not its parent to the set of + * forgotten vertices and continue to look for the next leaf.
+ */ + if (T.neighbors(i).size() == 1 || (T.neighbors(i).size() == 0 && bags_removed.at(i) == 0)) { + std::set<unsigned> intersection; + std::set_intersection(forgotten.begin(), forgotten.end(), + T.get_bag(i).begin(), T.get_bag(i).end(), + std::inserter(intersection, intersection.begin())); + if(!intersection.empty()) { + return false; + } + + if (T.neighbors(i).size() > 0) { + unsigned parent = T.neighbors(i).at(0); + std::set_difference(T.get_bag(i).begin(), T.get_bag(i).end(), + T.get_bag(parent).begin(), T.get_bag(parent).end(), + std::inserter(forgotten, forgotten.begin())); + } + + T.remove_vertex(i); + bags_removed.at(i) = 1; + break; + } + } + } + return true; +} + +/* + * Using all of the above functions, this simply checks whether a given + * purported tree decomposition for some graph actually is one. + */ +bool is_valid_decomposition(tree_decomposition& T, graph& g) +{ + if (!T.is_tree()) { + std::cerr << "Error: not a tree" << std::endl; + return false; + } else if (!check_vertex_coverage(T)) { + return false; + } else if (!check_edge_coverage(T,g)) { + return false; + } else if (!check_connectedness(T)) { + std::cerr << "Error: some vertex induces disconnected components in the tree" << std::endl; + return false; + } + else { + return true; + } +} + +int main(int argc, char** argv) { + bool is_valid = true; + bool empty_td_file = false; + if (argc < 2 || argc > 3) { + std::cerr << "Usage: " << argv[0] << " input.gr [input.td]" << std::endl; + std::cerr << std::endl; + std::cerr << "Validates syntactical and semantical correctness of .gr and .td files. If only a .gr file is given, it validates the syntactical correctness of that file."
<< std::endl; + exit(1); + } + tree_decomposition T; + graph g; + + std::ifstream fin(argv[1]); + try { + read_graph(fin,g); + } catch (const std::invalid_argument& e) { + std::cerr << "Invalid format in " << argv[1] << ": " << e.what() << std::endl; + is_valid = false; + } + fin.close(); + + if(argc==3 && is_valid) { + fin.open(argv[2]); + if (fin.peek() == std::ifstream::traits_type::eof()) { + is_valid = false; + empty_td_file = true; + } else { + try { + read_tree_decomposition(fin, T); + } catch (const std::invalid_argument& e) { + std::cerr << "Invalid format in " << argv[2] << ": " << e.what() << std::endl; + is_valid = false; + } + } + fin.close(); + + if (is_valid) { + is_valid = is_valid_decomposition(T,g); + } + } + + if (is_valid) { + std::cerr << "valid" << std::endl; + return 0; + } else if (empty_td_file) { + std::cerr << "invalid: empty .td file" << std::endl; + return 2; + } else { + std::cerr << "invalid" << std::endl; + return 1; + } +} diff --git a/solvers/TCS-Meiji/td-validate-master/test/empty/empty-td-file.gr b/solvers/TCS-Meiji/td-validate-master/test/empty/empty-td-file.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/empty/empty-td-file.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/empty/empty-td-file.td b/solvers/TCS-Meiji/td-validate-master/test/empty/empty-td-file.td new file mode 100644 index 0000000..e69de29 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-appears-twice.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-appears-twice.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-appears-twice.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-appears-twice.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-appears-twice.td new file mode 100644 
index 0000000..d30113b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-appears-twice.td @@ -0,0 +1,9 @@ +s td 4 3 5 +b 1 1 2 3 +b 1 1 4 5 +b 2 2 3 4 +b 3 3 4 5 +b 4 +1 2 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-index-out-of-bounds.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-index-out-of-bounds.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-index-out-of-bounds.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-index-out-of-bounds.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-index-out-of-bounds.td new file mode 100644 index 0000000..cabf48d --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-index-out-of-bounds.td @@ -0,0 +1,8 @@ +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 4 6 +1 2 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-no-index.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-no-index.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-no-index.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-no-index.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-no-index.td new file mode 100644 index 0000000..77e7b48 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/bag-no-index.td @@ -0,0 +1,9 @@ +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +b +1 2 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-before-s.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-before-s.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-before-s.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git 
a/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-before-s.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-before-s.td new file mode 100644 index 0000000..aa24eb8 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-before-s.td @@ -0,0 +1,8 @@ +1 2 +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-label-not-an-integer.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-label-not-an-integer.gr new file mode 100644 index 0000000..8af5278 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-label-not-an-integer.gr @@ -0,0 +1,5 @@ +p tw 5 5 +1 2notanumber +2notanumber 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-labels-out-of-bounds.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-labels-out-of-bounds.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-labels-out-of-bounds.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-labels-out-of-bounds.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-labels-out-of-bounds.td new file mode 100644 index 0000000..433ce26 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/edge-labels-out-of-bounds.td @@ -0,0 +1,8 @@ +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +1 7 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-gr.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-gr.gr new file mode 100644 index 0000000..b7db3ad --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-gr.gr @@ -0,0 +1,2 @@ +p tw 0 0 + diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-td.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-td.gr new file mode 100644 index 
0000000..a72e348 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-td.gr @@ -0,0 +1 @@ +p tw 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-td.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-td.td new file mode 100644 index 0000000..aa9bfed --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/empty-line-in-td.td @@ -0,0 +1,2 @@ +s td 0 0 0 + diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/halfedge.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/halfedge.gr new file mode 100644 index 0000000..4ea6edd --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/halfedge.gr @@ -0,0 +1,2 @@ +p tw 2 1 +1 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/p-appears-twice.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-appears-twice.gr new file mode 100644 index 0000000..986484e --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-appears-twice.gr @@ -0,0 +1,2 @@ +p tw 0 0 +p tw 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-edges-too-large.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-edges-too-large.gr new file mode 100644 index 0000000..afb49eb --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-edges-too-large.gr @@ -0,0 +1,5 @@ +p tw 5 5 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-edges-too-small.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-edges-too-small.gr new file mode 100644 index 0000000..b3ed4ac --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-edges-too-small.gr @@ -0,0 +1,5 @@ +p tw 5 3 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-vertices-too-small.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-vertices-too-small.gr new file mode 100644 index 0000000..208143f --- /dev/null +++ 
b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-num-vertices-too-small.gr @@ -0,0 +1,5 @@ +p tw 4 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/p-too-many-arguments.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-too-many-arguments.gr new file mode 100644 index 0000000..83fdd65 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/p-too-many-arguments.gr @@ -0,0 +1 @@ +p tw 0 0 7 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-appears-twice.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-appears-twice.gr new file mode 100644 index 0000000..a72e348 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-appears-twice.gr @@ -0,0 +1 @@ +p tw 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-appears-twice.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-appears-twice.td new file mode 100644 index 0000000..7fc6118 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-appears-twice.td @@ -0,0 +1,2 @@ +s td 0 0 0 +s td 0 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-large.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-large.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-large.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-large.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-large.td new file mode 100644 index 0000000..fb157c2 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-large.td @@ -0,0 +1,4 @@ +s td 3 4 5 +b 1 1 2 3 +b 2 2 3 4 5 +1 2 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-small.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-small.gr new file mode 100644 index 
0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-small.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-small.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-small.td new file mode 100644 index 0000000..1deb39c --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-num-bags-too-small.td @@ -0,0 +1,6 @@ +s td 2 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-too-many-arguments.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-too-many-arguments.gr new file mode 100644 index 0000000..a72e348 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-too-many-arguments.gr @@ -0,0 +1 @@ +p tw 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-too-many-arguments.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-too-many-arguments.td new file mode 100644 index 0000000..bbea876 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-too-many-arguments.td @@ -0,0 +1 @@ +s td 0 0 0 7 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-bigger-than-n_graph.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-bigger-than-n_graph.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-bigger-than-n_graph.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-bigger-than-n_graph.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-bigger-than-n_graph.td new file mode 100644 index 0000000..79c14fc --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-bigger-than-n_graph.td @@ -0,0 +1,8 @@ +s td 4 6 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +1 2 +2 3 +2 4 diff --git 
a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-large.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-large.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-large.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-large.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-large.td new file mode 100644 index 0000000..a532e66 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-large.td @@ -0,0 +1,6 @@ +s td 3 4 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-small.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-small.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-small.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-small.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-small.td new file mode 100644 index 0000000..ac6ccdc --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/s-width-too-small.td @@ -0,0 +1,6 @@ +s td 3 2 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-bag-edges-mixed.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-bag-edges-mixed.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-bag-edges-mixed.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-bag-edges-mixed.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-bag-edges-mixed.td new file mode 100644 index 0000000..28dd54c --- /dev/null +++ 
b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-bag-edges-mixed.td @@ -0,0 +1,6 @@ +s td 3 3 5 +b 3 3 4 5 +1 2 +b 2 2 3 4 +2 3 +b 1 1 2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-2.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-2.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-2.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-2.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-2.td new file mode 100644 index 0000000..23f49a7 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-2.td @@ -0,0 +1,9 @@ +s td 5 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +b 5 +1 2 +2 3 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-3-noroot.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-3-noroot.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-3-noroot.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-3-noroot.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-3-noroot.td new file mode 100644 index 0000000..c20869e --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-3-noroot.td @@ -0,0 +1,5 @@ +s td 3 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-4-cycle.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-4-cycle.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-4-cycle.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git 
a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-4-cycle.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-4-cycle.td new file mode 100644 index 0000000..eca1a89 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree-4-cycle.td @@ -0,0 +1,8 @@ +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +1 2 +2 3 +3 1 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree.td new file mode 100644 index 0000000..5a72d81 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-a-tree.td @@ -0,0 +1,7 @@ +s td 3 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +1 2 +2 3 +1 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-edges-covered.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-edges-covered.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-edges-covered.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-edges-covered.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-edges-covered.td new file mode 100644 index 0000000..6d7a952 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-edges-covered.td @@ -0,0 +1,6 @@ +s td 3 4 5 +b 1 2 3 +b 2 2 3 4 +b 3 1 3 4 5 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-vertices-covered.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-vertices-covered.gr new file mode 100644 
index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-vertices-covered.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-vertices-covered.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-vertices-covered.td new file mode 100644 index 0000000..5583c38 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-all-vertices-covered.td @@ -0,0 +1,6 @@ +s td 3 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-connected.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-connected.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-connected.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-connected.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-connected.td new file mode 100644 index 0000000..530bd47 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-not-connected.td @@ -0,0 +1,8 @@ +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 1 +1 2 +2 3 +3 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-line-wrong-order.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-line-wrong-order.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-line-wrong-order.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-line-wrong-order.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-line-wrong-order.td new file mode 100644 index 0000000..a213cf5 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-line-wrong-order.td @@ -0,0 +1,6 @@ +b 1 1 2 3 +s td 3 3 5 +b 2 2 3 4 +b 3 3 4 5 +1 2 +2 3 diff 
--git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-num-vertices-does-not-match-graph.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-num-vertices-does-not-match-graph.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-num-vertices-does-not-match-graph.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-num-vertices-does-not-match-graph.td b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-num-vertices-does-not-match-graph.td new file mode 100644 index 0000000..2e3e6bc --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/td-s-num-vertices-does-not-match-graph.td @@ -0,0 +1,6 @@ +s td 3 3 4 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/invalid/wrong-order.gr b/solvers/TCS-Meiji/td-validate-master/test/invalid/wrong-order.gr new file mode 100644 index 0000000..3fdacd1 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/invalid/wrong-order.gr @@ -0,0 +1,5 @@ +1 2 +p tw 5 4 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/n_upto_8.graph6 b/solvers/TCS-Meiji/td-validate-master/test/n_upto_8.graph6 new file mode 100644 index 0000000..df0d974 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/n_upto_8.graph6 @@ -0,0 +1,2753 @@ +C~ 3 +Dr{ 3 +D^{ 3 +D~{ 4 +Es\o 3 +Es\w 3 +EFzw 3 +EF~w 3 +E`~o 3 +E`~w 3 +EqNw 3 +E{Sw 3 +Eqlw 3 +Ed^w 3 +ER~w 3 +EN~w 4 +ER~o 3 +Et\w 4 +Er^w 4 +E}lw 4 +Er~w 4 +E^~w 4 +E~~w 5 +F?~v_ 3 +F?~vg 3 +F?~vw 3 +F?~~w 3 +F@rvo 3 +F@rvw 3 +F@r~o 3 +F@r~w 3 +F_lv_ 3 +FCxvg 3 +F_lvw 3 +F_}rg 3 +F@vvo 3 +FSpzw 3 +FCx~g 3 +F@vvw 3 +F_l~w 3 +F@v~w 3 +F@~vg 4 +FC~rw 3 +F@~vw 4 +F@~~w 4 +FoSvw 3 +F`dn_ 3 +FQo~g 3 +F`dnw 3 +FoS~_ 3 +FBqng 3 +FoS~w 3 +FPV^o 3 +FQqzw 3 +FPp}w 3 +F`U~w 3 +FDZ^o 3 +FDZ^w 3 +FDZ~o 3 +FDZ~w 3 +Fo\sw 3 +FRY]w 4 +FS\vw 4 +FS\~_ 4 +FBzvo 4 +Fdhzw 4 +FiM~w 4 +FIm~g 4 
+FIm~w 4 +FBz~w 4 +Fqdhw 4 +FsXXw 3 +Folqw 3 +FXd]w 4 +FP^Uw 4 +FFhmw 3 +FD^fw 4 +FDx~g 4 +Fclzw 4 +FD^vW 4 +FHu~w 4 +FEl~W 3 +FEl~w 3 +FD^~w 4 +Fimzw 4 +FI~tw 4 +FD^~o 4 +FFx~w 4 +FB~~w 4 +F@~v_ 4 +Fdhzo 4 +Fs\rw 4 +Fs\vw 4 +F}oxw 4 +F]qzw 4 +Fs\zw 4 +Fs\~w 4 +FF~vW 4 +FFz~w 4 +FF~~w 4 +F`o~_ 3 +F`o~g 3 +F`o~w 3 +F`N^o 3 +F`qzw 3 +F`N^w 3 +F`N~o 4 +F`N~w 4 +FQluW 3 +FwdXw 3 +FRh]w 3 +FTX^w 3 +Fqoxw 4 +FqhXw 3 +F`luW 3 +FpL]w 4 +FdW}w 3 +F`lvw 4 +FQl~_ 3 +F`^vo 4 +FTpzw 4 +FhN^w 4 +FKl~g 3 +FQl~w 3 +F`l~g 4 +FKuzw 3 +F`l~w 4 +F`^~w 4 +F{dzw 4 +F`~rw 4 +F`~vw 4 +F`~~w 4 +FqL~o 4 +FRqzw 4 +FdYzw 4 +FqL~w 4 +FqN~o 4 +FqN~w 4 +FD^vO 4 +F`l~_ 4 +FdlrW 4 +F{Szw 4 +F{S~w 4 +F}hXw 4 +Frqzw 4 +Fhuzw 4 +Fql~w 4 +FJz\w 4 +FMl~w 4 +Fd^~w 4 +F\^]w 4 +FMn~o 4 +FR~vw 4 +FR~~w 4 +Fqlzw 4 +Fdlzw 4 +FT\}w 4 +Fd\~w 4 +FT\~w 4 +FJn~w 4 +FJ~~w 5 +FN~~w 5 +F]\|w 4 +Fj]|w 5 +Fr\~w 5 +Ft\~w 5 +Fr^~w 5 +F}lzw 5 +F}l~w 5 +Fr~~w 5 +F^~~w 5 +F~~~w 6 +G?B~vo 3 +G?B~vs 3 +G?B~v{ 3 +G?B~~{ 3 +Gs?Jzw 3 +G?Ffvw 3 +Gs?Jz{ 3 +G?Ffv{ 3 +G?Ff~w 3 +G?Ff~{ 3 +G_Azvo 3 +G?brvs 3 +G_Azv{ 3 +G?]Nng 3 +G_B|rs 3 +G?`~vw 3 +G?]Nnk 3 +G?br~s 3 +G?`~v{ 3 +G_Az~{ 3 +G?Fn~{ 3 +G?F~vo 4 +G?F~vs 4 +G?b~r{ 3 +G?F~v{ 4 +G?F~~{ 4 +G?dffw 3 +G?dff{ 3 +GSPDzw 3 +G?NFnw 3 +GSPDz{ 3 +G?NFn{ 3 +G?df~w 3 +G?df~{ 3 +Go@Xvo 3 +G?Nefs 3 +Go@Xv{ 3 +Gk_axw 3 +GCW^Ng 3 +GCDnVg 3 +Gk_ax{ 3 +G@FNVw 3 +G@omnk 3 +G__zvk 3 +G_O|v{ 3 +G?svNg 3 +G?dvVg 3 +G?NVVw 3 +G?yRnk 3 +G?qrvk 3 +G?NVV{ 3 +Go@X~o 3 +G@Fe^s 3 +Go@X~{ 3 +GWCm}w 3 +G@o^nw 3 +GWCm}{ 3 +G@o^n{ 3 +GCDnvW 3 +GCDn^w 3 +GCDnv[ 3 +GCFfZ{ 3 +G__z~{ 3 +G?NVvW 3 +G?dv^w 3 +G?NVv[ 3 +G?qr~{ 3 +G?NV~w 3 +G?NV~{ 3 +GsP@xw 3 +G?o~fg 3 +GsP@x{ 3 +G?dnfw 3 +G?o~fk 3 +G?dnf{ 3 +G?o~vg 3 +G?o~nw 3 +GCbbz{ 3 +G?o~vk 3 +G?o~n{ 3 +G?dn~w 3 +G?dn~{ 3 +GC`zvo 4 +Go@zs{ 3 +G@J]vs 4 +GC`zv[ 4 +G@J]v{ 4 +G?Nvvo 4 +G@J}us 4 +G?Nvvw 4 +G?Nvvs 4 +G?Nvv{ 4 +G@J]~s 4 +G@J]~{ 4 +G?Nv~{ 4 +G?d~fo 4 +G_]Jlk 4 +G?{vMk 3 +GC[^NK 4 +G?N^fS 3 +GoAzq{ 3 +G?frvs 4 +G@F^Vs 4 +GCD~V[ 4 +G?d~f[ 3 +G@Fmv{ 4 
+G?lve[ 3 +G?w}nk 3 +G?qzvk 3 +G?lu^{ 3 +G@Fnuw 4 +G?N^vw 4 +GBE^^[ 4 +G@F^^s 4 +G?N^v[ 3 +G?N^v{ 4 +G?fr~s 4 +G_azz{ 4 +G@Fm~{ 4 +G?qz~k 3 +G?qz~{ 3 +G?N^~{ 4 +G?{~nk 4 +G@J~u{ 4 +G?f~r{ 4 +G?N~v{ 4 +G?N~~{ 4 +Gs@ipo 3 +GoSsZc 4 +G]`@W{ 3 +GDZ?~K 4 +GBFLVK 3 +GEEjV[ 4 +G?urf[ 3 +G?urf{ 4 +G`iayw 4 +G?}reK 3 +GHI]uw 4 +G?lvew 4 +GQKu][ 4 +G?lvfw 4 +GPH]u{ 4 +GDW^M{ 4 +G?lvf{ 4 +G?zTjo 3 +GPSmm[ 4 +G?urno 4 +G?luvK 3 +Gs@ix{ 3 +G?nRns 4 +GEEj^[ 4 +G?urn[ 3 +GBFL^{ 4 +G?lvvg 4 +G?lvnw 4 +GHI]}{ 4 +G?lvvk 4 +G?lvn{ 4 +G?uvZw 3 +G?lu~w 4 +G?nR~k 4 +G?lv]{ 3 +G?lu~{ 4 +G?lv~w 4 +G?lv~{ 4 +GS`zro 4 +GIQ|vo 4 +GLiay{ 4 +GPJ]q{ 4 +GBFnS{ 4 +G?^vfs 4 +GIQ|t{ 4 +GHJ]t{ 4 +GIQ|v{ 4 +GIR|ts 4 +G?l~vg 4 +G?^vvw 4 +G?|vnk 4 +G?^vns 4 +G?|vn{ 4 +GS`zz{ 4 +GIQ|~{ 4 +G?^v~{ 4 +G?nrvc 4 +GFFLZ[ 4 +G?l~fc 4 +G?]vmw 4 +G?}rm[ 3 +G?}rnk 4 +G?l~fk 4 +G?}rm{ 4 +G?l~f{ 4 +G?^vt{ 4 +G?l~nk 4 +G?l~vk 4 +G?l~n{ 4 +G?uz~k 4 +G?z\z{ 3 +G?uz~{ 4 +G?l~~{ 4 +G?|~nk 4 +G?^~vk 4 +G?l~~w 4 +G?~r~{ 4 +G?^~~{ 4 +Gs`zro 4 +G]r@x{ 4 +GiQ|p{ 4 +G?N~vo 4 +G?~vfk 4 +G?~vb{ 4 +G?~vf{ 4 +G?~~fc 4 +G?~vvk 4 +Gs`zz{ 4 +G?~vn{ 4 +G?~~vk 4 +G?~v~{ 4 +G?~~~{ 4 +G`?N~w 3 +G`?N~{ 3 +G_G^fw 3 +G_G^f{ 3 +G``Dzw 3 +G@bB~w 3 +G``Dz{ 3 +G@bB~{ 3 +G_G^~w 3 +G_G^~{ 3 +G`AZVo 3 +GAJcvs 3 +G`AZV{ 3 +GsOaxw 3 +GCSnNg 3 +GCO~Vg 3 +GsOax{ 3 +GAI^Vw 3 +GCojnk 3 +GA`lvk 3 +G_Sln{ 3 +G`AZ^o 3 +GAJc~s 3 +G`AZ^{ 3 +GCSv^W 3 +GAI^vw 3 +GCSv^[ 3 +GAI^v{ 3 +GCLNnW 3 +GCO~^w 3 +GCLNn[ 3 +GCRdz{ 3 +GA`l~{ 3 +GAI^~w 3 +GAI^~{ 3 +G`BHvo 3 +GGFcvs 3 +G`BHv{ 3 +G{?Ixw 3 +GGc^Ng 3 +GGE^Vg 3 +G{?Ix{ 3 +GGE^Vw 3 +G@qJnk 3 +G@bJvk 3 +G_C~V{ 3 +G`BH~o 3 +GGFc~s 3 +G`BH~{ 3 +GKC^^W 4 +G`C^^w 4 +GKC^^[ 4 +G`C^^{ 4 +G_K^nW 3 +GGE^^w 3 +G_K^n[ 3 +G_Fdz{ 3 +G@bJ~{ 3 +GGE^~w 4 +GGE^~{ 4 +GCQzvo 3 +G_sljk 3 +GOD}vS 3 +GQAzu[ 3 +G@bZvs 3 +GCQzv[ 3 +GAJ\v{ 3 +GGF\vo 4 +G`Azu[ 3 +GOFZvs 4 +G_Ezv[ 3 +G_Ezv{ 4 +G@]Nng 4 +GAJ}ts 3 +G_F|rs 4 +G@`~vw 4 +G@]Nnk 4 +G@`~v{ 4 +G@bZ~s 3 +GAJ\~{ 3 +GOFZ~s 4 +GGb\z{ 3 +G_Ez~{ 4 +GGE~~{ 4 +GGF~vo 4 +GGF~vs 4 +G@`~~w 4 
+GGF~v{ 4 +GGF~~{ 4 +GoDPV{ 3 +Gq?g~o 3 +G@VDNs 3 +Gq?g~{ 3 +GoDP^o 3 +GAj@ns 3 +GoDP^{ 3 +GqGR[w 3 +GGMU^g 3 +GGcu^g 3 +GqGR[{ 3 +GGcu^w 3 +GAq`~k 3 +G@qa~k 3 +GOLU^{ 3 +GqGTYw 3 +GHEM^g 3 +GH_]^g 3 +GqGTY{ 3 +GB_m^w 3 +Gg_X~k 3 +GH`K~k 3 +GgCk~{ 3 +GKG]^g 3 +GQG]^w 3 +G`_i~k 3 +GQG]^{ 3 +GqGUXw 3 +G@hU^g 3 +GqGUX{ 3 +G@hU^w 3 +GAot^k 3 +G@hU^{ 3 +GQG^]w 3 +GPO]~w 3 +GQG^]{ 3 +GPO]~{ 3 +G_LT~W 3 +G_ddzw 3 +GgCm|w 3 +GBIM~w 3 +G_LT~[ 3 +G_ddz{ 3 +GgCm|{ 3 +GBIM~{ 3 +G@hV~w 3 +G@hV~{ 3 +GsOiho 4 +Gr`?x[ 4 +GqMAXk 4 +GBJKvK 4 +G@tcnK 4 +GDYInK 4 +GANLfK 4 +GELc^[ 4 +GCLuV[ 4 +GCUjf{ 4 +G_hZtk 4 +GOLuu[ 4 +GCUjno 4 +GqAZX{ 4 +G@fR^s 4 +GCLu^[ 4 +GANLn{ 4 +GSSjm[ 4 +GONRu[ 4 +G@h]no 4 +GDSnM[ 4 +G_h\rk 4 +G@jQ~s 4 +GEO|^[ 4 +GCozn[ 4 +G@Y]ns 4 +GBJK~{ 4 +G_]tQk 4 +GGMuuw 4 +G@Unew 4 +GRG]][ 4 +G@h^fw 4 +GPW]m{ 4 +GBI^U{ 4 +G@h^f{ 4 +G@h^vg 4 +G@h^nw 4 +GOfRz{ 4 +G@h^vk 4 +G@h^n{ 4 +G@Y]~w 4 +G@fR~[ 4 +G_dlz{ 4 +G@Y^m{ 4 +GCNJ~{ 4 +G@h^~w 4 +G@h^~{ 4 +G`iRYw 4 +G_mrQk 4 +GQWs}w 4 +GAYtuw 4 +GGUtuw 4 +GII\uw 4 +GWLS}[ 4 +GMCm\[ 4 +GKSu\[ 4 +GAYtvw 4 +GPXS}{ 4 +GBiR]{ 4 +GREJ]{ 4 +GPUJm{ 4 +GBiR^{ 4 +GoMQzW 4 +G_ltQk 4 +GYCk}w 4 +G@ZTuw 4 +GANduw 4 +GGNTuw 4 +GiG\[{ 4 +G@Vdvw 4 +GXEI}{ 4 +G@jRu{ 4 +G@yRm{ 4 +GPoZm{ 4 +G@fbv{ 4 +GII[~o 4 +GSDmr[ 4 +GII[~s 4 +GcOx~[ 4 +GII[~{ 4 +G`XT|w 4 +GiG\~w 4 +G`XT|{ 4 +GiG\~{ 4 +GAYt~o 4 +GII\~w 4 +GAYt~s 4 +G@jR}{ 4 +GONR}{ 4 +GII]|{ 4 +GD`j~{ 4 +G@jR~o 4 +GHFL~w 4 +G@jR~s 4 +GCVdz{ 4 +G@jR~{ 4 +GANf~w 4 +GANf~{ 4 +G{O_w{ 4 +GFII^K 4 +GANcvK 4 +GF`H^[ 4 +GCS~F[ 4 +GCS~F{ 4 +GO]Rm[ 4 +GAM^No 4 +GqBHx{ 4 +GCS~VK 4 +GANc~s 4 +GCS~N[ 4 +GAM^Ns 4 +GANc~{ 4 +GCS~^w 4 +GCS~^[ 4 +GAM^n[ 4 +GCS~^{ 4 +GCS~~w 4 +GCS~~{ 4 +GC\t^c 4 +G_]rk{ 4 +GC]jnc 4 +G_mrY{ 4 +G@Yu}w 4 +GPDm}w 4 +GGl\nk 4 +GC\t]{ 4 +GO\s}{ 4 +GChzvk 4 +GC\t^{ 4 +G@na~c 4 +GFHm[{ 4 +G@jZvc 4 +GBJL}w 4 +G@Unmw 4 +G@l^Nk 4 +G@x\nk 4 +G@Y}u{ 4 +G@^c}{ 4 +G@jZvk 4 +G@U~V{ 4 +GbElY{ 4 +G@U~fS 4 +G@frvS 4 +G_lsz[ 4 +GPFJ}w 4 +GAMnmw 3 +GAk~Nk 4 +GAw|nk 4 +GOL}u{ 4 +GCL~U{ 4 +GONZvk 4 
+GCL~V{ 4 +GIJ\vo 4 +GCjrrs 4 +G@Zs}s 4 +GPFi}s 4 +GIJ\s{ 4 +GANvS{ 4 +GAZtvs 4 +GIJ\t{ 4 +GHFmt{ 4 +GIJ\v{ 4 +GIJ}ts 4 +GANnvw 4 +GaXt|{ 4 +GAZt~s 4 +GAlnn{ 4 +GGU|~s 4 +GChz~k 4 +GGU|~{ 4 +GIJ\~{ 4 +GANn~{ 4 +GDVb[{ 4 +GANe|w 4 +GES|^[ 4 +GCszn[ 4 +GCUzv{ 4 +GAM~vw 4 +GHg}}{ 4 +G@h}~s 4 +GBM^]{ 4 +GAM~^s 4 +G@]^n{ 4 +G@Vnt{ 4 +GCpz|{ 4 +G@Z\~{ 4 +GAe~Z{ 4 +GAN\~{ 4 +GGs~l{ 4 +G@h}~k 4 +G_ezz{ 4 +GGN\~{ 4 +GAM~~{ 4 +GFS~^[ 4 +GAZ~t{ 4 +G@f~r{ 4 +GAN~v{ 4 +GAN~~{ 4 +GoD_~o 3 +G_N@ns 3 +G@r@ns 3 +GoD_~{ 3 +Gr?KzW 3 +G_Ku^g 3 +Gr?Kz[ 3 +G_Ku^w 3 +G@r@~k 3 +G_Ku^{ 3 +Gr?LYw 3 +G`G]^g 3 +Gr?LY{ 3 +G`G]^w 3 +G``H~k 3 +G`G]^{ 3 +G_Kv]w 3 +G`G^]w 4 +G`G]~w 4 +G_Kv]{ 3 +G`G^]{ 4 +G`G]~{ 4 +G`G^~w 4 +G`G^~{ 4 +GqMBG{ 4 +GII[vK 4 +GaMHnK 4 +G`IYvK 3 +GcKq^[ 4 +GKSs^[ 4 +GK`Xv{ 4 +GoKq}W 4 +GBYc}w 4 +GKO|uw 4 +GQO|uw 4 +GQO|vw 4 +GBYc}{ 4 +GPUR]{ 4 +GPTT]{ 4 +GBYc~{ 4 +G_urHs 3 +GIMc}w 4 +GKH\uw 4 +GCXtuw 3 +GOTtuw 4 +GcLTZ[ 4 +Godax{ 3 +GbCm\[ 4 +GCXtvw 4 +GPYQ}{ 4 +GPQZu{ 4 +GBYT]{ 3 +GRO\]{ 4 +GBYT^{ 4 +G_]uHs 3 +GhG[}w 4 +G`H\uw 4 +G_Ltuw 4 +GaKu\[ 4 +G_Ltvw 4 +GpGY}{ 4 +G`IZu{ 4 +G`W\m{ 4 +G_Mrv{ 4 +GKIY~o 3 +GgK]l[ 3 +GKW\m[ 3 +GSLMj[ 3 +G`Slm[ 3 +GKQX~s 3 +G`QX~s 3 +GcGy~[ 3 +GKIY~[ 3 +G`QX~{ 3 +GK`X~o 4 +GKSlm[ 4 +G`IY~s 4 +GK`X~[ 4 +G`IY~{ 4 +GDjBzw 4 +GhG]~w 4 +GDjBz{ 4 +GhG]~{ 4 +GBYd}w 4 +GI_|~w 4 +GBYd}{ 4 +GI_|~{ 4 +GAir~o 4 +GH`\~w 4 +GAir~s 4 +G`Q\z{ 4 +G@rTz{ 3 +GGfR|{ 4 +GB`l~{ 4 +G_Mr~o 4 +G_Lt~w 4 +G_Mr~s 4 +G`I]z{ 4 +G_Mr~{ 4 +GCXv~w 4 +GCXv~{ 4 +G`QXvK 3 +GIMS^K 3 +G`UHnK 3 +G`MQ^K 3 +GSLQ^[ 3 +GKMQ^[ 3 +GKdP^{ 3 +GoStIs 3 +GqGW~K 4 +GpDG~K 3 +GoDXvK 3 +G[CY^[ 4 +GoSo~[ 3 +GWEYv{ 4 +G`qihs 3 +GbGk}w 4 +GaG|uw 3 +GgC|uw 4 +GgC|vw 4 +GbGk}{ 4 +G`MR]{ 3 +GpCZ]{ 4 +G`Ma~{ 4 +G_urPk 3 +GJ_k}w 3 +GQH\uw 3 +GAhtuw 3 +GGdtuw 3 +GQLT][ 3 +GoN@y{ 3 +GwCky{ 3 +GAhtvw 3 +GPhQ}{ 3 +GPTLm{ 3 +GDYR]{ 3 +GTOZ]{ 3 +GDYR^{ 3 +G_zPpk 3 +GMGk}w 4 +GWD\uw 4 +G@ptuw 3 +GkC\Z[ 4 +G@ptvw 4 +GPNA}{ 4 +GYC\]{ 4 +G@^Dm{ 3 +G@^Dn{ 4 +GWEY~o 4 +GgK^K{ 3 +GgEX~s 4 +GoDX~[ 3 +GoDX~{ 4 
+GLaJzw 4 +GhC^^w 4 +GLaJz{ 4 +GhC^^{ 4 +G`MJ~g 4 +GgC|~w 4 +G`MJ~k 4 +G`_z~{ 4 +GCYr~o 3 +GGdt~w 3 +GCYr~s 3 +GKQ\z{ 3 +GAjR|{ 3 +G_NTz{ 3 +GBaj~{ 3 +G@nB~g 4 +GWD\~w 4 +G@nB~k 4 +GgE\z{ 4 +G@qr~{ 4 +G@pv~w 4 +G@pv~{ 4 +G[dAXk 3 +GDPkvK 3 +GDYQ^K 3 +G@^CnK 3 +GANTVK 3 +GEcjN[ 3 +GCNRV[ 3 +GCUrV{ 3 +GtPHOk 3 +GoLTIs 4 +GqMAh[ 3 +GdhAXk 3 +Gm__x[ 3 +GHqO~K 4 +GbIG~K 4 +G_]PnK 3 +G`U_~K 4 +GHFKvK 3 +G`DkvK 3 +GUCi^[ 4 +GSDZV[ 4 +GKC}V[ 4 +G_[sn[ 3 +GGM]f{ 4 +G_}ahk 3 +G@Yuuw 4 +GH_}uw 4 +GCLnew 3 +GGM^ew 4 +GaLT\[ 4 +GAMnfw 4 +GDWmm{ 4 +GPLU]{ 4 +G@]VM{ 3 +GPS^M{ 4 +G@UvV{ 4 +GCUr^o 3 +G@]VM[ 3 +GqAix{ 3 +GANT^s 3 +GCUr^[ 3 +GANT^{ 3 +GcLLj[ 4 +GGM]no 4 +GGlVK{ 3 +GaK^L[ 4 +GOL]nS 3 +GGNS~s 4 +GcCz^[ 4 +GHE]^s 4 +G_czn[ 3 +GHFK~{ 4 +GAg}no 3 +GoKZm[ 3 +GOlRm[ 3 +GDO~U[ 3 +GAjP~s 3 +GDQZ^s 3 +GEH\^[ 3 +GCYq~[ 3 +GDPk~{ 3 +GGc}no 4 +GKO|u[ 4 +G_Lut[ 3 +GQK^M[ 4 +G_L\nS 3 +GoFPz[ 3 +GGfP~s 4 +G`EZ^s 4 +GKC}^[ 4 +GGc}n[ 3 +G`Dk~{ 4 +G@h^e[ 3 +GOL^e[ 3 +GGqX~k 3 +G@qi~k 3 +G_hX~{ 3 +G@UvvW 4 +GAMnnw 4 +GCjRz{ 4 +G@Uvv[ 4 +G@hu~{ 4 +GAg~nw 4 +GcIZz{ 4 +GB_~^{ 4 +GCLm~w 3 +GCNR~[ 3 +GEI^Z{ 3 +GCLm~{ 3 +GOL]~w 4 +GHE^]{ 4 +G`E^Z{ 4 +G_L\~[ 3 +G_L\~{ 4 +GGc~~w 4 +GGc~~{ 4 +Gv?IXW 3 +GwC[Zc 3 +G[dBG{ 3 +G}?HW{ 3 +GKYO~K 3 +GJEK^K 3 +G`N?~K 3 +G`FHvK 3 +GeCh^[ 3 +GKEZV[ 3 +G_spn[ 3 +GGeZf{ 3 +G`r@xw 3 +G_~@hk 3 +G`G}uw 3 +G_K~ew 4 +G`Ku][ 3 +G_K~fw 4 +G`G}u{ 3 +G`K^M{ 4 +G_K~f{ 4 +G`O|u[ 3 +GGeZno 3 +GGdvS{ 3 +G`K^M[ 3 +GoFax{ 3 +G_K~Uk 3 +G_NP~s 3 +GKEZ^[ 3 +GKEZ^s 3 +GGeZn[ 3 +G`FH~{ 3 +G@o~e[ 3 +G_K~e[ 3 +G_ox~k 3 +G@rH~k 3 +G_ox~{ 3 +G`Kv]w 4 +G_K~nw 4 +GKaZz{ 4 +G`Kv]{ 4 +G_K~n{ 4 +G_K}~w 4 +GKE^Z{ 4 +G_K~]{ 3 +G_K}~{ 4 +G_K~~w 4 +G_K~~{ 4 +GtP@Ww 3 +GsLAXk 3 +Gja@W{ 3 +GCXsvK 3 +GBYS^K 3 +GC\cnK 3 +G@NUVK 3 +GEKmN[ 3 +GCdrV[ 3 +GCdrV{ 3 +G@Neuw 4 +GWC}uw 4 +G@NNew 3 +GsCZZ[ 4 +G@NNfw 4 +GFGm]{ 4 +GXC]]{ 4 +G@NVU{ 3 +G@Nev{ 4 +G_ozls 3 +GCdr^o 3 +G@NVU[ 3 +GwAYx{ 3 +G@NU^s 3 +GCdr^[ 3 +G@NU^{ 3 +G@o}no 3 +GSOzu[ 3 +G_[tm[ 3 +GWK]m[ 3 +GoEZj[ 3 +G@rP~s 3 +GCYZns 3 +GEG}^[ 3 
+GCYZn[ 3 +GCXs~{ 3 +G@Ne~o 4 +G@NNnw 4 +GCfbz{ 4 +G@Ne~s 4 +G@Ne~{ 4 +G@o~nw 4 +GoEZz{ 4 +G@o~n{ 4 +G@NM~w 3 +G@NV]{ 3 +GCY^j{ 3 +G_o|z{ 3 +GCdj~{ 3 +G@o~~w 4 +G@o~~{ 4 +GThYrK 4 +GoDzvo 4 +G`Q|q{ 4 +G`G}}w 4 +GoDzs{ 4 +GoDzvs 4 +GK`zt{ 4 +GoDzt{ 4 +GoDzv{ 4 +Gsdbzw 4 +GoFzrs 4 +G@rvvw 4 +Gsdbz{ 4 +G@rvv{ 4 +GoDz~s 4 +GoDz~{ 4 +G@rv~{ 4 +G_Nrvo 4 +GRYa{{ 4 +GAjrts 4 +G_Ntrs 4 +GRiay{ 4 +G`Q}p{ 3 +G`Iy}s 4 +G@rp}s 3 +G`Fjs{ 4 +GCZrvs 4 +GCX~fs 4 +GI`|t{ 4 +GH`}t{ 4 +G`H}t{ 4 +G_Nrv{ 4 +GC]r^c 4 +GCljnc 4 +G@]u]k 3 +GJFL[{ 4 +GC\lnc 4 +GREmY{ 4 +GAjR|w 3 +G`Ej}w 4 +G_urX{ 3 +GOlZnk 4 +GGmZnk 4 +GAizvk 4 +GC]r]{ 3 +GO]q}{ 4 +GC]r^{ 4 +G_Mzvc 4 +G`FLzw 3 +G_luX{ 3 +G_kznk 4 +G_Mzvk 4 +G_lp}{ 4 +G_Mzv{ 4 +G_Nvrw 4 +G_L~vw 4 +GIpt|{ 4 +G_L~ns 4 +G_[~n{ 4 +G_Nr~s 4 +GK`z|{ 4 +G_Nr~{ 4 +GGd|~s 4 +GGez~s 4 +GAh|~k 4 +G@qz}{ 3 +GGd|~{ 4 +G_[~l{ 4 +G_Mz~k 4 +GGf\z{ 4 +G_Mz~{ 4 +G_L~~{ 4 +GgEzvo 4 +G`]Jlk 4 +G@~Djk 4 +G`\Llk 4 +G`Q\zw 4 +GoEzq{ 4 +GAjp}s 3 +G`Fh}s 4 +G@rrvs 4 +G@p~fs 4 +GgD|t{ 4 +GKFjt{ 4 +GWD}t{ 4 +GgEzv{ 4 +G@lve[ 4 +G@nJnc 4 +G@rTzw 3 +G_zPx{ 3 +G@nJnk 4 +G@qzvk 4 +G@lu]{ 3 +G@lu^{ 4 +GgE~rw 4 +G@p~vw 4 +GIVd|{ 4 +GgD|~s 4 +G@p~v{ 4 +G@rr~s 4 +G``z|{ 4 +GgEz~{ 4 +G@qz~s 4 +G@qz~k 4 +G@qz~{ 4 +G@p~~{ 4 +G_{~nk 4 +GoF~r{ 4 +GCZ~r{ 4 +G@r~r{ 4 +G@r~v{ 4 +G@r~~{ 4 +GEGm^g 3 +GWC]^w 3 +GBaJ^k 3 +GWC]^{ 3 +GwCP}W 3 +G@ou^g 3 +G@NE^g 3 +GwCP}[ 3 +G@ou^w 3 +G@ou^k 3 +G_op~k 3 +G@ou^{ 3 +GWC]~w 4 +GWC]~{ 4 +GWC]~W 3 +G_otzw 3 +GEGm~w 3 +GWC]~[ 3 +G_otz{ 3 +GEGm~{ 3 +G@NF~w 4 +G@NF~{ 4 +GC^bk{ 4 +G@Ne}w 4 +GEK}^[ 4 +GCdzv[ 4 +GCdzv{ 4 +GFJMX{ 4 +G@NvUs 4 +G@NNmw 3 +G@w}nk 4 +G@Nmu{ 4 +G@Nmvk 4 +G@Nmv{ 4 +G@N^vw 4 +GWK}}{ 4 +G@N^u{ 4 +G@N^v{ 4 +GCfjz{ 4 +G@N]~{ 4 +G@w}~k 4 +G@Nv]{ 4 +G@Nm~{ 4 +G@N^~{ 4 +G@N~vs 5 +G@N~u{ 4 +G@N~v{ 5 +G@N~~{ 5 +GBZc|s 4 +GINLno 4 +GBZc{{ 4 +GINLk{ 4 +GBZc~s 4 +GINLl{ 4 +GINLn{ 4 +G_]rno 4 +G_ltjs 4 +GpHY{{ 4 +G_\ttk 4 +GC^bns 4 +GYO{|{ 4 +GC\vNs 4 +GLJI|{ 4 +GhH[|{ 4 +G_]rn{ 4 +GBdn^w 4 +GIo||{ 4 +GBZe|{ 4 +G_\t~k 4 +GBdn^{ 4 +GBY\~w 4 
+GHUm|{ 4 +GBY]|{ 4 +GGmuz{ 4 +GBY\~{ 4 +GC\t~W 4 +G_mrzw 4 +GC\v^w 4 +G_\t|{ 4 +G_\t~{ 4 +GC\v~w 4 +GC\v~{ 4 +G_\tc[ 4 +GqKsY[ 4 +Go]RG{ 3 +GsX_w{ 4 +GdDjS{ 4 +GCxrc{ 4 +GMhHk{ 4 +GkKq[{ 4 +GqOxs{ 4 +GYO{t{ 4 +Gi_xt{ 4 +GqOxv{ 4 +G?l~f_ 4 +G``zto 4 +G`hZtg 4 +GC\t^_ 4 +G@U~fO 4 +G_Mzv_ 4 +G@NvUo 4 +GSprrw 4 +G_lvbw 4 +GGuvbw 4 +GRr@x{ 4 +GbY`{{ 4 +GaYp|s 4 +GIn@|k 4 +GAn`~c 4 +G_lp~c 4 +G@zP~c 4 +G_lvfw 4 +GsDjr{ 4 +GpH]r{ 4 +GqLcz{ 4 +G_lvf{ 4 +GpHY~o 4 +G_ltrk 4 +GCxrns 4 +Gi_x|{ 4 +GX`Y|{ 4 +GpHY|{ 4 +GpHY~{ 4 +GRrDzw 4 +GCxvnw 4 +GSprz{ 4 +GRrDz{ 4 +GCxr~k 4 +GDVb~[ 4 +GBffZ{ 4 +G@vfn{ 4 +GA]t~W 4 +G_mzrk 4 +G_lr~w 4 +G_ltz{ 4 +G_lr~{ 4 +G@lv]w 4 +GGur~w 4 +GGur|{ 4 +G@zTz{ 4 +GGur~{ 4 +G_lv~w 4 +G_lv~{ 4 +G}_xq[ 4 +Grdcz[ 4 +GrEZZ[ 4 +GiNLh{ 4 +GIM|u[ 4 +GcLzt[ 4 +GsPzp{ 4 +GBM^^W 4 +GhJ]p{ 4 +GEK~^W 4 +GaY|rk 4 +GEw~Nk 4 +Godzr{ 4 +GImuZ{ 4 +GamrZ{ 4 +GSpzvk 4 +Go\s~{ 4 +G_~trk 4 +G@vvvw 4 +Gsdjz{ 4 +GFY^^{ 4 +GWN^u{ 4 +GSpzz{ 4 +Gk`z|{ 4 +GIq|~{ 4 +G_l~vk 4 +GC^vv[ 4 +GCxz~k 4 +GCx~n{ 4 +G@vv~{ 4 +GxIYy{ 4 +GJZc{{ 4 +GDX\~W 4 +G_]vjw 4 +GCdz~o 4 +GChz~o 4 +GFdj^[ 4 +GBd~V[ 4 +GFXk|{ 4 +GC|rn[ 4 +GC|rn{ 4 +GC|vj{ 4 +GIp||{ 4 +GC\~vk 4 +G_]z~k 4 +GC^r~{ 4 +GDdz~[ 4 +GBfj|{ 4 +GCuzz{ 4 +GC\|~{ 4 +GC\|~[ 4 +G_mzz{ 4 +G_lz~{ 4 +GC\~~{ 4 +GbY\Zk 4 +GpYZi{ 4 +GFdj\[ 4 +GBUl~W 4 +GBMn]w 4 +GJR\t[ 4 +GBY\~W 4 +GELl~W 4 +GiI\zw 4 +GCUz~o 4 +GAh|~o 4 +G@U~^o 4 +GFUj^[ 4 +GBN^V[ 4 +GEL~V[ 4 +GA^tt{ 4 +GExp|{ 4 +GA}rn[ 4 +GDxZn{ 4 +GFLm~[ 4 +GIY|}{ 4 +GE[~n[ 4 +G@l~m{ 4 +G@^u~{ 4 +GDx^j{ 4 +GaY|z{ 4 +GAyz~k 4 +GENj~{ 4 +G@uz~k 4 +G@l}~{ 4 +GAl~~{ 4 +Glj@y{ 4 +GjQkx{ 4 +Grr@x{ 4 +GpJZq{ 4 +GiJ\p{ 4 +G_]~bk 4 +GAM~^o 4 +G_Mz~o 4 +G@Nm~o 4 +G_}rnk 4 +GCx~fk 4 +G_nrr{ 4 +GG~Tj{ 4 +G_}rn{ 4 +GG^\~k 4 +G@l~]{ 4 +GGuz~{ 4 +G_l~~{ 4 +GFNn]{ 4 +G@vv~w 4 +GC~rz{ 4 +GA~tz{ 4 +G_~tz{ 4 +GC~r~{ 4 +G@v~~{ 4 +GDYi~c 4 +GDUj^c 4 +GPZQ{{ 4 +GDTl^c 4 +GBej]k 4 +GDhZ^k 4 +GPX[}{ 4 +GBiZ^k 4 +GBYk}{ 4 +GHiY~k 4 +GDTl^{ 4 +G@nRvK 4 +GodXz[ 4 +GXFKy{ 4 +GBfLZk 3 +GEgz^k 4 +GWMY}{ 4 +GEMj]{ 4 
+GWMY~k 4 +GEMj^{ 4 +GiIX~o 4 +GAnbls 4 +G@zRtk 4 +GBfb\s 4 +G`Ttt[ 4 +GhQ[x{ 4 +GBVd\s 4 +G@vbns 4 +GiIX|{ 4 +GAxtns 4 +GLDm\{ 4 +GXFI|{ 4 +GMDl\{ 4 +GiIX~{ 4 +GDXk~c 4 +GDdj^c 4 +GoKy}[ 4 +GHZS{{ 4 +GBMu][ 4 +GBh\^k 4 +GPhY}{ 4 +GDXk}{ 4 +GPhY~k 4 +GDdj^{ 4 +G@luvK 4 +G@nRnS 4 +GBg}^k 4 +GPpX~k 4 +GPL]]{ 4 +GDLm]{ 4 +GPL]^k 4 +GDLm^{ 4 +GJQk~o 4 +GLeaz[ 4 +GHNU[{ 4 +GBNe[{ 4 +G@^ens 4 +GJQk|{ 4 +GJFL\{ 4 +GHNU\{ 4 +GJQk~{ 4 +GELn^w 4 +GgS||{ 4 +GQXt}{ 4 +GgS|~{ 4 +GBNN^w 4 +GIUl|{ 4 +GIdl~{ 4 +GA]t~w 4 +GHemz{ 4 +GHY]|{ 4 +GG]u|{ 4 +GBiZ~{ 4 +GEW|~w 4 +GWS}|{ 4 +GHo}|{ 4 +GDX\}{ 4 +G@xu|{ 4 +G@^T~{ 4 +G@l}vK 4 +GAlv^w 4 +GAxt~k 4 +GBVd~[ 4 +GGtt|{ 4 +GAlv^{ 4 +GELn~w 4 +GELn~{ 4 +GHM]~w 4 +GPL]}{ 4 +GQK~]{ 4 +GHM]~{ 4 +G@lu~w 4 +GHc}~[ 4 +GDW}}{ 4 +G@lv]{ 4 +G@lu~{ 4 +G@lv~w 4 +G@lv~{ 4 +GReZZ[ 5 +GIUl|w 4 +GJQ|u[ 4 +GMK}^[ 5 +GIU|t{ 5 +GImq~[ 4 +GMox|{ 5 +GMox~{ 5 +G@^vvw 5 +GhT\|{ 5 +GMK}~[ 5 +G@nr~s 5 +GXL]~{ 5 +GSdzz{ 5 +GPL}}{ 5 +GHN]~{ 5 +G@l~vk 5 +G@l~n{ 5 +G@^v~{ 5 +GUKz][ 5 +GFMj][ 4 +GBM}^S 4 +G@l}^c 4 +GXMY}{ 5 +GPL}u{ 5 +G@l~e{ 4 +G@l~f{ 5 +G@l~~{ 5 +GXL}}{ 5 +GHN~u{ 5 +G@l~~w 5 +G@~r~{ 5 +G@^~~{ 5 +GreZZ[ 5 +Gs\sz[ 4 +Glgyy{ 5 +GFzax{ 4 +G_~v`{ 4 +G@N~vo 5 +GXN]u{ 5 +G@~ve{ 4 +G]oxz{ 5 +G@~vf{ 5 +G@~vvk 5 +GXN]}{ 5 +G@~vn{ 5 +G@~~vk 5 +G@~v~{ 5 +G@~~~{ 5 +GP\s}{ 5 +GDhzu{ 4 +GDhzv{ 5 +Gg]\jk 4 +GIN\vK 4 +GML\^[ 5 +GMUh|{ 5 +GIM}v[ 4 +GMhX~{ 5 +GEnbzw 4 +GIM~vw 5 +GbX\|{ 5 +GBY|~s 5 +GE\t~[ 4 +GRW}~{ 5 +GBY|}{ 4 +GPdz}{ 5 +GDhz~{ 5 +GBZ\~[ 4 +GBZ\~{ 4 +GMS|~[ 5 +GIN\~{ 5 +GIM~~{ 5 +GjX\|{ 5 +GIN~t{ 5 +GIN~v{ 5 +GIN~~{ 5 +GoSszW 3 +GoLP}W 3 +G`iiqk 4 +GDZ@}w 3 +GWdP}w 4 +GPV@}w 4 +GoSsz[ 4 +GqGZ[{ 4 +G`dcz[ 4 +GDZ@~w 4 +GDZ@}{ 4 +G[OX}{ 4 +GPV@}{ 4 +GDZ@~{ 4 +G`qipk 3 +GgMP}w 4 +G`YP}w 3 +GoLP}[ 4 +GqG\Y{ 4 +G`deX{ 3 +G`hP~w 4 +GpOX}{ 4 +G`YP}{ 4 +G`hP~{ 4 +G`iZQk 4 +GRQH}w 4 +GqG]X{ 4 +G`hSz[ 4 +GSXP~w 4 +GRQH}{ 4 +GSXP~{ 4 +GDZDzw 4 +G`ddzw 4 +GSXTzw 4 +GoSr~w 4 +GDZDz{ 4 +G`ddz{ 4 +GSXTz{ 4 +GoSr~{ 4 +GoSv~w 4 +GoSv~{ 4 +GT`zQs 4 +G`YZno 4 +GDZcy{ 4 
+G`hZtk 4 +GRdcz[ 4 +GQXs~s 4 +GYFH|{ 4 +GHdml{ 4 +GgL\l{ 4 +GIh\l{ 4 +GDZa~s 4 +G`YZn{ 4 +GThiqk 3 +GTTa|[ 4 +GpDi~o 4 +GpDi{{ 4 +GPrQx{ 4 +GqIYx{ 4 +G`Uli{ 4 +G`Yq{{ 4 +G`h\rk 4 +GbJH{{ 4 +GRddY{ 4 +GBja|s 4 +GKpp~s 4 +GkHX|{ 4 +GdHZ\{ 4 +GL`i|{ 4 +GY`X|{ 4 +GSXq~s 4 +GpDi~{ 4 +G`iZrg 4 +G_mrrg 4 +G@lveW 4 +G`Mi~_ 4 +GDYjmo 4 +GEjbrw 4 +G`dnbw 4 +GBr`|s 4 +GcXp|s 4 +GsXPx{ 4 +GDZ`}s 4 +G`hX~c 4 +GQdh~c 4 +G`dnfw 4 +GsSrZ{ 4 +GoLur{ 4 +G`dnf{ 4 +G`iiy{ 4 +G`Xs{{ 4 +GQK}^k 4 +GQK}]{ 4 +GQox~k 4 +GQK}^{ 4 +G`Mi~c 4 +GPUmi{ 4 +G`qix{ 3 +G`Mi~k 4 +G`hX~k 4 +G`W{}{ 4 +G`Mi~{ 4 +GBir]s 4 +G`Ujk{ 4 +G`iZY{ 4 +GQMZ^k 4 +GKhX~k 4 +GQW{}{ 4 +GQMZ^{ 4 +GsXTzw 4 +GQo~nw 4 +GEjbz{ 4 +GBrdz{ 4 +GcXtz{ 4 +GsXTz{ 4 +GDZe~{ 4 +G`dj~w 4 +GQo|z{ 4 +G`dlz{ 4 +GQdlz{ 4 +G`dj~{ 4 +G`dn~w 4 +G`dn~{ 4 +GBZT\s 4 +GINT^o 4 +GBZT[{ 4 +GINT[{ 4 +GINc{{ 4 +GINc~s 4 +GINT\{ 4 +GINT^{ 4 +GIM^^w 4 +GBpl|{ 4 +GEXt~[ 4 +GINe|{ 4 +GBpl~{ 4 +GIc|~w 4 +GHU^\{ 4 +GKS~\{ 4 +G`d\z{ 4 +GIg}|{ 4 +GPUZ~{ 4 +GKS~^w 4 +G`X\|{ 4 +G`X\~{ 4 +GKS~~w 4 +GKS~~{ 4 +G_lu`[ 3 +GsXPW{ 4 +Go]ag{ 3 +Go[si[ 4 +GqYPW{ 4 +Gr_YX[ 4 +GkY_w{ 4 +GkSsX[ 4 +G`]RK[ 4 +Go\Pk[ 4 +GDxRK{ 4 +GlDH[{ 4 +GMop[{ 4 +GMgZK{ 4 +GkSp[{ 4 +GKozc{ 4 +GdSjK{ 4 +GcLrS{ 4 +GoSzc{ 4 +G]Og|{ 4 +GYUHl{ 4 +GiEht{ 4 +GJQkt{ 4 +GoSzf{ 4 +GkcqX[ 4 +Gokqi[ 4 +GqopW{ 4 +G`lak[ 4 +GosrG{ 3 +GdLJK{ 4 +GkCzS{ 4 +GEhrS{ 4 +GMhP[{ 4 +GKxPk{ 4 +GqSp[{ 4 +GZQG|{ 4 +Gj_g|{ 4 +GqSp^{ 4 +GC]r^_ 4 +G`h\rg 4 +GDTl^_ 4 +G`L\vG 4 +GDhr]o 4 +G@luvG 4 +G@l~Ec 4 +GcNbrw 4 +GoS~bw 4 +Ggc~bw 4 +GIi^bw 4 +GK^@|k 4 +GdZ@x{ 4 +Gahp|s 4 +G`ZP|s 4 +GIjP|s 4 +G`Up~S 4 +GSXX~c 4 +GBjH~c 4 +GD^@~K 4 +GoS~fw 4 +G{Ciz{ 4 +GoS~b{ 4 +GpO}r{ 4 +GwSsz{ 4 +GoS~f{ 4 +G`Urt[ 4 +GqHX~o 4 +GD^Dj[ 4 +GDZR\s 4 +GpFHy{ 4 +GIjP{{ 4 +GDZUX{ 4 +GDpr^s 4 +GkDh|{ 4 +G[Di|{ 4 +GqHX~{ 4 +GhFH~o 4 +GDnBj[ 4 +GPUuY{ 3 +GBjJk{ 4 +GHps~s 4 +GhFH|{ 4 +GIdt\{ 4 +GPVa~s 4 +GYD\\{ 4 +GHdu\{ 4 +GhFH~{ 4 +G`X\tk 4 +GaMjno 4 +G`Vcx{ 4 +G`]Rl[ 4 +GaMjk{ 4 +GD^DZk 4 +GBjR\s 4 +GBhmk{ 4 +GEXt^s 4 +GIjP|{ 4 +GhDk|{ 4 
+GHfa~s 4 +GMH\\{ 4 +GHfa|{ 4 +GaMjn{ 4 +GBYt]s 4 +GSLi~k 4 +GIiX~k 4 +GSWy}{ 4 +GIMk~{ 4 +GdZDzw 4 +GBqnnw 4 +GcNbz{ 4 +GKpr|{ 4 +GdZDz{ 4 +GPpuz{ 4 +G`Vdz{ 4 +GKVdz{ 4 +G`Vd~{ 4 +GoSz~w 4 +GIiZ|{ 4 +GoSz~{ 4 +Ggcz~w 4 +Ggcz|{ 4 +GBqj|{ 4 +GWd\z{ 4 +Ggcz~{ 4 +GIiZ~w 4 +GIiZ~{ 4 +GoS~~w 4 +GoS~~{ 4 +G`iZzw 4 +GpFJzw 4 +GKT\|w 4 +Gbcz^[ 4 +GQ\s~[ 4 +GbW{|{ 4 +GRX[|{ 4 +Gc\p~[ 4 +Gc\p~{ 4 +GrhSz[ 4 +GxEiy{ 4 +G`]^Jk 4 +GxEZY{ 4 +GpNRY{ 4 +GQMzu[ 4 +G`L\~W 4 +GKL\~W 4 +GpFjq{ 4 +GQK~]w 4 +G`Z\rk 4 +GQw}nk 4 +G`fjr{ 4 +G`urZ{ 4 +GQqzvk 4 +GQlu^{ 4 +GjFLX{ 4 +G`dlzw 4 +GKS}|w 4 +GhFLzw 4 +GRL]^[ 4 +GULZ^[ 4 +GbUh|{ 4 +GQL}v[ 4 +GUTh|{ 4 +Galp~[ 4 +GdSz^{ 4 +GhM]Zk 4 +GReiz[ 4 +GQdlzw 4 +GIiZ|w 4 +GaMnjw 4 +GJM]^[ 4 +GMcz^[ 4 +GJUk|{ 4 +GKlq~[ 4 +GJo{|{ 4 +GIls~[ 4 +GKszn{ 4 +GPV^vw 4 +GUL^^[ 4 +GbM^^{ 4 +Gc[~j{ 4 +GKpz|{ 4 +G`dz|{ 4 +GQdz|{ 4 +GcYzz{ 4 +GcLz~{ 4 +GKl^n[ 4 +GQYz}{ 4 +GqJ\z{ 4 +GQqz~{ 4 +GdS~Z{ 4 +GPUz}{ 4 +G`Uz~{ 4 +GKs~j{ 4 +GKUz~{ 4 +GW]^m{ 4 +GcL~v[ 4 +GEYz~[ 4 +GHrZ|{ 4 +GPrZz{ 4 +GHr\~{ 4 +GPV^~{ 4 +GP]q}{ 4 +GDlr]{ 4 +GDlr^{ 4 +GDvrZs 4 +GPT~vw 4 +GMTl|{ 4 +GbL^\{ 4 +GK\^l{ 4 +GBiz~s 4 +GE\nl{ 4 +GP\^n{ 4 +GDYz~{ 4 +GPT~~{ 4 +GrEjY{ 4 +G{hQx{ 4 +GpZPy{ 4 +GiNTX{ 4 +GpYqy{ 4 +Guhax{ 4 +GyIYx{ 4 +GpFZZs 4 +GcKz~W 4 +GIM\~W 4 +GcL|r[ 4 +GgN\rk 4 +GhFmp{ 4 +GpRXzs 4 +GDxmnk 4 +GPx]nk 4 +Goszj{ 4 +GPp}vk 4 +Ggupz{ 4 +GIutZ{ 4 +Ggezr{ 4 +Golq~{ 4 +G`Y^jw 4 +GJZT[{ 4 +GD]r][ 4 +GDXl}w 4 +GEltZ[ 4 +GBYl}w 4 +GCuzrk 4 +GBlu^[ 4 +GElr^[ 4 +GEX|t{ 4 +GElr^{ 4 +GBZ\vK 4 +GqIZzw 4 +GEmrZ[ 4 +GD\t][ 4 +GCurzw 4 +GBur^[ 4 +GE\t^[ 4 +GBZ\t{ 4 +GEhzv{ 4 +Gourzw 4 +GEjzrs 4 +GDZ^vw 4 +GWlu}{ 4 +GEh~r{ 4 +GDZ^v{ 4 +GDtvZ{ 4 +GDYz}{ 4 +GEYz~{ 4 +GDpz|{ 4 +GEhz~{ 4 +GDZ^~{ 4 +GDZ~vs 4 +G`V~t{ 4 +GHf~r{ 4 +GDZ~u{ 4 +GDZ~v{ 4 +GDZ~~{ 4 +GRY]~w 5 +GdhZz{ 5 +Go\u|{ 4 +GLh]z{ 5 +Gc\tz{ 5 +Gqdlz{ 5 +GRY]~{ 5 +GThzq{ 4 +GS\r~w 5 +GS\tz{ 5 +GS\r~{ 5 +GS\v~w 5 +GS\v~{ 5 +GqK{z[ 5 +GUMZZ[ 5 +GdLZ\[ 4 +GMMZ\[ 4 +GXeYy{ 5 +GkKy~[ 5 +GYMY~[ 4 +Gicx|{ 5 +GYS{|{ 5 +GiK{~[ 4 +GkSx~{ 5 
+GRX\~w 4 +GLiZz{ 4 +GbW|}{ 4 +GK\u|{ 4 +GMLm|{ 4 +GRX\~{ 4 +GiK|~w 5 +G`\t|{ 5 +GiK|~{ 5 +GXT\~w 5 +GhL\}{ 5 +GDnbz{ 5 +GFXm|{ 4 +GXT\~{ 5 +GiK~~w 5 +GiK~~{ 5 +GsSzzw 5 +Gdhzq{ 5 +G`mrzw 5 +GrW{}{ 5 +Gs\pz{ 5 +Gdhzu{ 5 +Gdhzv{ 5 +GBzvvw 5 +GliZz{ 5 +Gq\t~{ 5 +GbY|~s 5 +GsLzz{ 5 +Gdhz~{ 5 +GBzv~{ 5 +Gkmqz[ 4 +Gmoxx{ 5 +G{Ky}[ 5 +GwvPx{ 4 +GLjYzs 4 +Gdpzp{ 4 +GDhz~o 5 +GMv`x{ 5 +Go|rk{ 4 +G\hY}{ 5 +GRY}u{ 4 +G\hYz{ 5 +G[dzr{ 5 +GVXk}{ 5 +GS\~f{ 5 +GInt~s 5 +GbY|}{ 5 +GT\v]{ 5 +GIm~n{ 5 +GLhz}{ 5 +GS\z~{ 5 +GIm~~{ 5 +GiN~t{ 5 +GIm~~w 5 +GI~t~{ 5 +GBz~~{ 5 +Gq\tz{ 5 +GiL||{ 5 +GiMz~{ 5 +GInr|{ 5 +GI]||{ 5 +GI^t|{ 5 +GImz~{ 5 +Gi[~l{ 5 +GJZ\~{ 5 +GBx~~{ 5 +GXdz}{ 5 +Gamzz{ 5 +GPtz~{ 5 +GYNZ|{ 5 +GImz}{ 5 +GJNm|{ 5 +GHuz~{ 5 +GD\|~[ 5 +GDlz}{ 4 +GDlz~{ 5 +GD\~~{ 5 +Gimzz{ 5 +GBz~r{ 5 +GFyzz{ 5 +GFxz~{ 5 +GB^~~{ 5 +G}hHg{ 4 +G{LP}[ 4 +GrddY{ 4 +GqMZj[ 5 +GpUji{ 4 +GRMZ][ 5 +G[pX~k 4 +GqczZ{ 5 +GbejZ{ 5 +GRqZ^k 5 +Gqdh~{ 5 +G}h_w{ 4 +GsXXzk 4 +GiMkzk 4 +GiK|[{ 4 +GXqY~k 4 +GsXXz{ 4 +GiiXz{ 4 +GsXX~{ 4 +GXd]~w 5 +GsSzz{ 5 +GkS|z{ 5 +GsSz~[ 4 +GXd]~{ 5 +GP^U~w 5 +GkL\~[ 5 +Gclrz{ 5 +Gqo|z{ 5 +GdX\z{ 5 +GLp\z{ 5 +Gg]u|{ 4 +GdX\~{ 5 +G[Sz~w 5 +GD^dz{ 5 +G[Sz~{ 5 +Gouzrk 4 +GD^e~w 4 +GDxuz{ 4 +GEnbz{ 4 +GFhm~{ 4 +GD^f~w 5 +GD^f~{ 5 +GrfHz[ 5 +Gqlsz[ 5 +Gloxy{ 5 +Gldhy{ 5 +Gsxqx{ 4 +GMnax{ 5 +Go}ri{ 4 +G@l~vg 5 +GDzqzs 4 +G\NI}{ 5 +GP^uu{ 5 +G]hXz{ 5 +GD^vU{ 4 +GD^vV{ 5 +GBzt~s 5 +GhU|}{ 5 +GDx~n{ 5 +GPvr}{ 5 +Gclzz{ 5 +GP^u}{ 5 +Gclz~{ 5 +GBnv^s 5 +GLNj}{ 5 +GD^v]{ 4 +GD^v^{ 5 +GHu~~{ 5 +GFZm|{ 4 +Gouzz{ 4 +GEl~^{ 4 +GBuz~[ 4 +GElz~[ 4 +GElz~{ 4 +GEl~~{ 4 +GFl~^[ 5 +GD^~v[ 5 +GEl~~w 4 +GD^~v{ 5 +GD^~~{ 5 +Gi]||{ 5 +Gi^t|{ 5 +Gimz~{ 5 +GFx~~{ 5 +GB~~~{ 5 +Gs\zrk 5 +Gthzq{ 5 +Gs\r~w 5 +Gs\rz{ 5 +GFzdz{ 5 +Gs\tz{ 5 +Gs\r~{ 5 +Gs\v~w 5 +Gs\v~{ 5 +GDZ~vo 4 +G`N~vo 5 +Gdhz~o 5 +G@~~fc 5 +G}oxz{ 5 +G{dzr{ 5 +G}ox~{ 5 +Gvzax{ 5 +Gfzdz{ 5 +GrZ\z{ 5 +G]qz~{ 5 +Gs^rz{ 5 +Gs\zz{ 5 +Gs\z~{ 5 +Gs\~~{ 5 +Gs~rz{ 5 +GF~vZ{ 5 +GF~v^{ 5 +GFz~~{ 5 +GF~~~{ 5 +GTPH}w 3 +G`osz[ 3 +GTPH~w 3 +GTPH}{ 3 
+GTPH~{ 3 +G`rHpk 3 +G`N@}w 3 +GwCX}w 4 +G`ouX{ 3 +GwCX}[ 3 +G`N@~w 4 +G`N@}{ 3 +GwCX}{ 4 +G`N@~{ 4 +GTPLzw 3 +G`NDzw 4 +GwCZ~w 4 +GTPLz{ 3 +G`NDz{ 4 +GwCZ~{ 4 +GwC^~w 4 +GwC^~{ 4 +GpEiy{ 4 +GajPx{ 4 +G`MZvK 4 +Gagx~k 4 +GoKy}{ 4 +GcKz]{ 4 +GoLX~k 4 +GaK|^{ 4 +GcKy~[ 4 +GKdX~[ 4 +GKdX~{ 4 +GaK~^w 4 +GWT\|{ 4 +GgK}~{ 4 +GQS|~w 4 +GHe^Z{ 4 +GKW}|{ 4 +GPT\~{ 4 +GQL^^w 4 +GHpu|{ 4 +GKTl|{ 4 +GKXu|{ 4 +G`S~^{ 4 +GaK|~w 4 +GgK}|{ 4 +G`L\}{ 4 +G`W}|{ 4 +G`MZ~{ 4 +G`L^~w 4 +G`L^~{ 4 +GoxPg{ 3 +Gospi[ 4 +GqhPW{ 3 +GwN?w{ 3 +G`xPk[ 3 +G{CYX[ 4 +G`^@k[ 3 +GwSo{[ 3 +GElbK{ 4 +GKsrK{ 3 +GmCh[{ 4 +GfOh[{ 4 +GeKjK{ 3 +GKdrS{ 3 +GwDXs{ 4 +GYU_|{ 4 +GJYS\{ 3 +GyCX\{ 4 +GwDXv{ 4 +GTUaz[ 3 +G`NJno 4 +G`Ncy{ 4 +GRhSz[ 3 +GKXs~s 4 +GJbH|{ 4 +GJ`\\{ 3 +G`NJl{ 4 +G`Na~s 4 +G`NJn{ 4 +GTlai[ 3 +G`Na|s 4 +GRhTY{ 3 +GwDX~o 4 +GQjQx{ 3 +GwEYx{ 4 +G`NUX{ 3 +G`pp~s 4 +GwDX|{ 4 +GKdr\{ 3 +GwDX~{ 4 +GDdj^_ 4 +G_mzbc 4 +GDYr]o 3 +G`K~Mo 4 +GKfbrw 4 +G`o~bw 4 +GJbH|s 4 +G`^@|k 4 +GtPHx{ 4 +GQhX~c 3 +G`NH~c 4 +G`o~fw 4 +G{CZZ{ 4 +GwD\r{ 4 +G`o~f{ 4 +GDYr]s 3 +G`Lmk{ 3 +GQMi~k 3 +GQhX~k 3 +GQMi}{ 3 +GQMi~{ 3 +G`K~Ms 4 +G`rHx{ 3 +G`rPx{ 3 +G`K}^k 4 +G`K}]{ 3 +G`ox~k 4 +G`K}^{ 4 +GtPLzw 4 +G`o~nw 4 +GKfbz{ 4 +GQVdz{ 4 +G`pr|{ 4 +GtPLz{ 4 +G`Ne~{ 4 +G`oz~w 4 +GQh\z{ 3 +G`o|z{ 4 +G`oz~{ 4 +G`o~~w 4 +G`o~~{ 4 +G`K}~w 4 +G`K}}{ 4 +G`K~]{ 4 +G`K}~{ 4 +G`K~~w 5 +G`K~~{ 5 +GKejzw 4 +GKeZzw 4 +GwEZzw 4 +GJeZ^[ 4 +GJY[|{ 4 +GK\s~[ 4 +GKdzv{ 4 +G}G]X{ 4 +GpNay{ 4 +G`^Ljk 4 +G{dax{ 4 +GQL\~W 4 +G`Mzu[ 4 +GwFXzs 4 +G`K~]w 4 +G`Nmrk 4 +G`w}nk 4 +G`qzr{ 4 +G`zPz{ 4 +G`qzvk 4 +G`lu^{ 4 +G`N^vw 4 +GwK}}{ 4 +GKd~r{ 4 +G`N^v{ 4 +GQT||{ 4 +G`pz|{ 4 +GKdz~{ 4 +GbK~]{ 4 +GeK~Z{ 4 +G`Mz}{ 4 +G`NZ~{ 4 +GKd~v[ 4 +GKZ\z{ 4 +GwF\z{ 4 +G`qz~{ 4 +G`N^~{ 4 +GJY[|[ 4 +GxFHy{ 4 +GeK|Z[ 4 +GRiiy{ 4 +GQh\zw 3 +GKd\zw 4 +G`NNjw 4 +GeKz^[ 4 +GbK}^[ 4 +GQT|t{ 4 +GQlq~[ 4 +Gbox|{ 4 +GeKz^{ 4 +GqKx}[ 5 +GdKz][ 4 +GhK{}{ 5 +G`Mzu{ 4 +G`Mzv{ 5 +G`L~vw 5 +GhK}}{ 5 +G`Mz~s 5 +GhK}~{ 5 +G`Mz~{ 5 +G`L~~{ 5 +G`N~vs 5 +G`N~u{ 4 +G`N~r{ 5 +G`N~v{ 5 
+G`N~~{ 5 +G~`HW{ 4 +GrhTY{ 4 +GtPix{ 4 +GhNUX{ 4 +GJMk}[ 4 +GTpi~k 4 +GwdXz{ 4 +GJqkz{ 4 +GwdX~{ 4 +G`uvZw 4 +GTX]~w 4 +Gegzz{ 4 +GQlu~[ 4 +GeW|z{ 4 +GKttz{ 4 +GwS}|{ 4 +Gqh\z{ 4 +GTX]~{ 4 +GTlrY{ 4 +GTXZ~w 4 +GTX\z{ 4 +GTXZ~{ 4 +GTX^~w 4 +GTX^~{ 4 +GhY[zk 4 +G{PXx{ 4 +G{Ssz[ 4 +GRK}][ 5 +GpUrY{ 4 +Gqox~k 4 +GheZZ{ 5 +Gqoxz{ 5 +GRrH~k 4 +Gqox~{ 5 +GdMZZ[ 4 +GqK|Y{ 4 +GqKy~[ 4 +GqSx|{ 5 +GqSx~{ 5 +G}hPW{ 4 +G}GZ[{ 4 +GrotY{ 4 +GqNax{ 4 +GRMi}[ 4 +GppX~k 4 +Gqgyz{ 4 +GRqi~k 4 +GhqXz{ 4 +GqhX~{ 4 +GpL]~w 5 +Gkczz{ 5 +GqK~]{ 5 +GpL]z{ 5 +GkK}z{ 5 +GpL]~{ 5 +GpLZ~w 5 +G`ltz{ 5 +GpLZ~{ 5 +G`zTzw 4 +G`lu~w 4 +GqL\~[ 4 +GdW}z{ 4 +GKurz{ 4 +G`lv]{ 4 +GdW}~{ 4 +G`lv~w 5 +G`lv~{ 5 +GfdjX{ 4 +GKfzrs 4 +GKuzrk 4 +Gktpx{ 4 +G\TZ\{ 4 +GxLY|{ 5 +GySx~{ 5 +GR]~Ms 5 +G`^vvw 5 +Gdnbz{ 5 +GxL]~{ 5 +GJnL~k 5 +G[dzz{ 5 +GTpz~{ 5 +G`^v~{ 5 +GxL]z{ 5 +GYT||{ 5 +GjK~]{ 5 +GhNZ~{ 5 +GLfjz{ 4 +GJjZ|{ 4 +GQmzz{ 4 +GQlz~{ 4 +GpLz}{ 5 +GhNZ|{ 5 +G`mzz{ 5 +G`lz~{ 5 +G`\~~{ 5 +Gkyqx{ 4 +GqzPx{ 4 +Gdthzk 4 +GLrXzs 4 +Gwuqx{ 4 +GDYz~o 4 +Gfphx{ 4 +GKvpzs 4 +GQ}rm[ 4 +G\YY}{ 4 +GTX}u{ 4 +G\diz{ 4 +GTxq}{ 4 +GQl~f{ 4 +GKnr~s 4 +GRh}}{ 4 +GRlu~[ 4 +GKl~n{ 4 +GQl~~{ 4 +G{TXx{ 5 +G{dXz[ 4 +GrrHx{ 5 +GuSxz[ 5 +GqltY{ 4 +Gfhhy{ 4 +G`Mz~o 5 +G`}rm[ 4 +GxMY}{ 5 +GpL}u{ 5 +G{Sxz{ 5 +G`l~e{ 4 +G`l~f{ 5 +G`nr~s 5 +GpL}}{ 5 +G`l~vk 5 +G`l~n{ 5 +G`vtz{ 4 +GbNl}{ 4 +GQlz}{ 4 +GKuzz{ 4 +G`z\z{ 4 +GKuz~{ 4 +G`l~~{ 5 +GxN]z{ 5 +GhN~u{ 5 +G`~rz{ 5 +GQl~~w 4 +G`l~~w 5 +G`~r~{ 5 +G`^~~{ 5 +G~rHx{ 5 +G^rLz{ 5 +G{dzz{ 5 +G{dz~{ 5 +G`~~vk 5 +G`~v~{ 5 +G`~~~{ 5 +GC~rrk 4 +Gddzr[ 5 +Gkdzp{ 5 +GKurzw 4 +G]XX|{ 5 +G[\q|{ 5 +GrWy|{ 5 +GqLzv{ 5 +GqluX{ 5 +GkurX{ 4 +Grqix{ 5 +GdlrY{ 5 +G`mzrk 5 +GtLi}{ 5 +GtXXz{ 5 +Gdlr]{ 5 +Gdlr^{ 5 +GT\v]w 5 +GvSzZ[ 5 +GRl~Ms 5 +GqL~vw 5 +GTzRz{ 5 +GqL~v{ 5 +GqL~r{ 5 +GYdz|{ 5 +GkLz|{ 5 +GqLz~{ 5 +GLnJ~k 5 +GqN\z{ 5 +GRqz~{ 5 +Gbiz~s 5 +GdYz}{ 5 +GdYz~{ 5 +GqL~~{ 5 +G}iZz{ 5 +GqN~r{ 5 +GqN~v{ 5 +GqN~~{ 5 +GtXZzw 5 +GRlu~W 4 +GtlrY{ 5 +G{Sz~w 5 +G{Szz{ 5 +GRvdz{ 5 +Gd^dz{ 5 +GtX\z{ 5 +G{Sz~{ 5 
+G{S~~w 5 +G{S~~{ 5 +GD^v^o 5 +G`l~vg 5 +GdYz~o 5 +G}hXz{ 5 +G}hX~{ 5 +G}nax{ 5 +Grvdz{ 5 +Gp^uz{ 5 +GyN\z{ 5 +Grqz~{ 5 +Ghuzz{ 5 +GRl}~[ 5 +Ghuz~{ 5 +Gql~~{ 5 +GjNm|{ 5 +GJm~]{ 5 +GJz\z{ 5 +GJz\~{ 5 +GMlz~{ 5 +GMl~~{ 5 +G{uzz{ 5 +Gq~tz{ 5 +GMn~r{ 5 +GMn~v{ 5 +Gd^~~{ 5 +Gru~Z{ 5 +Gulzz{ 5 +G\^]~{ 5 +GR~v~{ 5 +GR~~~{ 5 +GuYzz{ 5 +Gqmzz{ 5 +Gd\|~[ 5 +GU\|~[ 5 +Gqlz~{ 5 +GMu|z{ 5 +Gdlz~{ 5 +GLlz}{ 5 +GR]}~{ 5 +Gd\~~{ 5 +GT\~~{ 5 +GlnZz{ 5 +Gfyzz{ 5 +GT\~~w 5 +G]lz~{ 5 +GJn~~{ 5 +Gptzz{ 5 +Gdlzz{ 5 +GT\z}{ 5 +Gd\z~{ 5 +GTlzz{ 5 +GT\z~{ 5 +GJ]~~{ 5 +GJ^~~{ 6 +GJ~~~{ 6 +GN~~~{ 6 +G]\z|{ 5 +Gj\||{ 6 +Gr\z~{ 6 +Gtlzz{ 6 +Gt\z~{ 6 +Gr\~~{ 6 +Gt\~~{ 6 +Gr^~~{ 6 +GR~~vk 5 +G}lzz{ 6 +Gt\~~w 6 +G}lz~{ 6 +G~z\z{ 6 +G}l~~{ 6 +Gr~~~{ 6 +G^~~~{ 6 +G~~~~{ 7 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs.graph6 b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs.graph6 new file mode 100644 index 0000000..8848b26 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs.graph6 @@ -0,0 +1,9 @@ +H`?G][} 3 +IoC?GLfFo 3 +I@o?GLNLo 3 +IGC?GK^xo 3 +I`?GGNJMo 3 +I`?GGKzpo 3 +IGC?KLfFo 3 +J`?G?CBKw^? 
3 +I`?GONFLo 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/GNP_20_20_0.gr b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/GNP_20_20_0.gr new file mode 100644 index 0000000..5bcf70b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/GNP_20_20_0.gr @@ -0,0 +1,47 @@ +p tw 20 46 +1 13 +1 14 +1 16 +1 19 +1 4 +2 5 +11 15 +11 17 +11 18 +12 14 +12 16 +12 17 +13 18 +15 19 +16 18 +16 19 +17 18 +17 19 +17 20 +18 20 +19 20 +3 19 +3 5 +3 7 +4 13 +4 15 +4 16 +4 18 +4 19 +4 6 +4 9 +5 20 +5 7 +5 9 +6 16 +6 7 +7 15 +7 20 +8 12 +8 13 +8 15 +8 16 +9 18 +9 19 +10 12 +10 17 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/GNP_20_20_0.td b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/GNP_20_20_0.td new file mode 100644 index 0000000..712885c --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/GNP_20_20_0.td @@ -0,0 +1,27 @@ +c width = 6, time = 0.077675 +s td 13 7 20 +b 1 4 7 8 16 17 18 19 +b 2 1 4 8 16 17 18 19 +b 3 1 8 12 16 17 +b 4 1 4 8 13 18 +b 5 3 4 5 7 17 18 19 +b 6 4 5 9 18 19 +b 7 5 7 17 18 19 20 +b 8 4 6 7 16 +b 9 4 7 8 15 17 18 19 +b 10 11 15 17 18 +b 11 12 1 14 +b 12 17 12 10 +b 13 5 2 +2 3 +2 4 +1 2 +5 6 +5 7 +1 5 +1 8 +9 10 +1 9 +3 11 +3 12 +5 13 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomBipartite_25_50_1.gr b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomBipartite_25_50_1.gr new file mode 100644 index 0000000..d362350 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomBipartite_25_50_1.gr @@ -0,0 +1,115 @@ +p tw 75 114 +1 22 +1 71 +2 32 +2 54 +2 62 +2 65 +3 37 +4 20 +4 33 +4 39 +4 67 +4 70 +5 41 +5 73 +6 28 +6 30 +6 39 +7 19 +7 22 +7 28 +7 43 +7 69 +7 70 +9 22 +9 25 +9 49 +9 63 +9 66 +10 23 +10 38 +10 54 +10 65 +10 70 +11 25 +11 28 +11 33 +12 36 +12 39 +12 42 +12 44 +12 47 +12 64 +12 73 +13 27 +13 50 +13 51 +13 54 +13 70 +13 73 +14 41 +14 60 +14 65 +14 72 +14 73 +15 27 
+15 32 +15 41 +15 42 +15 52 +15 58 +15 59 +15 62 +15 66 +15 67 +16 19 +16 37 +16 51 +16 54 +16 66 +17 39 +17 58 +17 61 +18 45 +18 70 +24 23 +35 29 +35 39 +35 42 +35 45 +35 47 +35 54 +35 59 +35 65 +35 72 +46 30 +46 31 +46 32 +46 48 +46 50 +46 61 +46 67 +57 25 +57 32 +57 37 +57 39 +57 40 +57 50 +57 54 +57 60 +57 71 +68 41 +68 42 +68 64 +74 21 +74 37 +74 38 +74 39 +74 61 +74 73 +75 25 +75 38 +75 41 +75 42 +75 63 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomBipartite_25_50_1.td b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomBipartite_25_50_1.td new file mode 100644 index 0000000..7644d66 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomBipartite_25_50_1.td @@ -0,0 +1,108 @@ +s td 54 10 75 +b 1 10 15 16 39 46 54 57 70 73 75 +b 2 10 15 39 46 54 57 65 70 73 75 +b 3 2 15 32 46 54 57 65 +b 4 15 35 39 54 57 65 68 70 73 75 +b 5 12 15 35 39 42 68 73 75 +b 6 14 15 35 41 57 65 68 73 75 +b 7 7 11 15 16 22 39 46 57 70 75 +b 8 4 11 15 39 46 67 70 +b 9 6 7 11 28 39 46 +b 10 9 11 15 16 22 25 57 66 75 +b 11 13 15 16 46 50 54 57 70 73 +b 12 10 15 16 39 46 57 73 74 75 +b 13 15 17 39 46 61 74 +b 14 16 37 57 74 +b 15 10 38 74 75 +b 16 11 4 33 +b 17 15 13 27 +b 18 15 2 62 +b 19 16 7 19 +b 20 16 13 51 +b 21 17 15 58 +b 22 35 70 18 +b 23 35 18 45 +b 24 35 12 47 +b 25 35 15 59 +b 26 35 14 72 +b 27 46 6 30 +b 28 57 14 60 +b 29 57 22 1 +b 30 57 1 71 +b 31 68 12 64 +b 32 73 41 5 +b 33 75 9 63 +b 34 37 3 +b 35 4 20 +b 36 74 21 +b 37 10 23 +b 38 23 24 +b 39 35 29 +b 40 46 31 +b 41 12 36 +b 42 57 40 +b 43 7 43 +b 44 12 44 +b 45 46 48 +b 46 9 49 +b 47 15 52 +b 48 7 69 +b 49 8 +b 50 26 +b 51 34 +b 52 53 +b 53 55 +b 54 56 +2 3 +4 5 +4 6 +2 4 +1 2 +7 8 +7 9 +7 10 +1 7 +1 11 +12 13 +12 14 +12 15 +1 12 +8 16 +11 17 +3 18 +7 19 +11 20 +13 21 +4 22 +22 23 +5 24 +4 25 +6 26 +9 27 +6 28 +7 29 +29 30 +5 31 +6 32 +10 33 +14 34 +8 35 +12 36 +1 37 +37 38 +4 39 +1 40 +5 41 +1 42 +7 43 +5 44 +1 45 +10 46 +1 47 +7 48 +1 49 +49 50 +50 
51 +51 52 +52 53 +53 54 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomGNM_100_100.gr b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomGNM_100_100.gr new file mode 100644 index 0000000..65f5f15 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomGNM_100_100.gr @@ -0,0 +1,101 @@ +p tw 100 100 +1 8 +1 78 +4 17 +4 31 +4 64 +4 92 +6 47 +7 12 +7 62 +7 64 +8 32 +8 78 +8 82 +9 12 +9 17 +9 49 +10 88 +12 50 +12 54 +13 21 +13 38 +13 93 +15 21 +15 27 +15 62 +16 70 +17 45 +17 63 +17 69 +17 99 +18 42 +18 51 +18 55 +18 58 +18 85 +19 25 +19 33 +19 77 +20 81 +20 89 +22 27 +22 28 +22 82 +22 95 +23 40 +24 86 +26 91 +26 94 +27 55 +27 99 +28 54 +31 80 +34 51 +34 67 +37 56 +37 73 +37 87 +37 94 +38 61 +39 80 +40 73 +40 74 +45 94 +46 40 +46 99 +48 67 +48 92 +49 63 +52 100 +55 85 +56 61 +58 93 +59 63 +59 87 +60 67 +61 92 +62 88 +62 97 +63 66 +63 97 +66 81 +66 100 +67 91 +68 5 +68 15 +68 40 +68 43 +68 95 +72 75 +77 78 +78 83 +79 51 +79 59 +79 95 +83 84 +83 85 +87 88 +90 3 +90 15 +90 69 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomGNM_100_100.td b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomGNM_100_100.td new file mode 100644 index 0000000..49f7319 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/RandomGNM_100_100.td @@ -0,0 +1,173 @@ +c width = 6, time = 0.085690 +s td 86 7 100 +b 1 7 15 17 18 22 37 79 +b 2 7 15 17 18 37 79 92 +b 3 4 7 17 92 +b 4 13 15 18 37 61 92 +b 5 17 18 37 67 79 92 +b 6 18 51 67 79 +b 7 17 37 67 94 +b 8 7 15 17 22 37 63 79 +b 9 7 9 12 17 22 63 +b 10 7 15 37 63 79 87 +b 11 59 63 79 87 +b 12 7 15 62 63 87 +b 13 15 17 18 22 37 79 99 +b 14 15 18 22 27 55 85 99 +b 15 15 22 37 68 79 99 +b 16 37 40 68 99 +b 17 22 68 79 95 +b 18 8 78 85 +b 19 8 22 85 +b 20 7 4 64 +b 21 12 22 28 +b 22 12 28 54 +b 23 13 18 58 +b 24 13 58 93 +b 25 15 13 21 +b 26 15 17 69 +b 27 15 69 90 +b 28 22 8 82 +b 29 40 37 73 +b 30 61 13 38 +b 31 61 37 56 
+b 32 63 9 49 +b 33 63 62 97 +b 34 67 51 34 +b 35 67 94 26 +b 36 67 26 91 +b 37 78 8 1 +b 38 85 78 83 +b 39 87 62 88 +b 40 92 67 48 +b 41 94 17 45 +b 42 99 40 46 +b 43 90 3 +b 44 68 5 +b 45 88 10 +b 46 40 23 +b 47 4 31 +b 48 31 80 +b 49 80 39 +b 50 8 32 +b 51 18 42 +b 52 68 43 +b 53 12 50 +b 54 67 60 +b 55 63 66 +b 56 66 81 +b 57 81 20 +b 58 20 89 +b 59 66 100 +b 60 100 52 +b 61 40 74 +b 62 78 77 +b 63 77 19 +b 64 19 25 +b 65 19 33 +b 66 83 84 +b 67 2 +b 68 6 47 +b 69 11 +b 70 14 +b 71 16 70 +b 72 24 86 +b 73 29 +b 74 30 +b 75 35 +b 76 36 +b 77 41 +b 78 44 +b 79 53 +b 80 57 +b 81 65 +b 82 71 +b 83 72 75 +b 84 76 +b 85 96 +b 86 98 +2 3 +2 4 +5 6 +5 7 +2 5 +1 2 +8 9 +10 11 +10 12 +8 10 +1 8 +13 14 +15 16 +15 17 +13 15 +1 13 +19 18 +14 19 +3 20 +9 21 +21 22 +4 23 +23 24 +4 25 +1 26 +26 27 +19 28 +16 29 +4 30 +4 31 +9 32 +12 33 +6 34 +7 35 +35 36 +18 37 +18 38 +12 39 +5 40 +7 41 +16 42 +27 43 +15 44 +39 45 +16 46 +3 47 +47 48 +48 49 +18 50 +1 51 +15 52 +9 53 +5 54 +8 55 +55 56 +56 57 +57 58 +55 59 +59 60 +16 61 +18 62 +62 63 +63 64 +63 65 +38 66 +1 67 +67 68 +68 69 +69 70 +70 71 +71 72 +72 73 +73 74 +74 75 +75 76 +76 77 +77 78 +78 79 +79 80 +80 81 +81 82 +82 83 +83 84 +84 85 +85 86 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/Toroidal6RegularGrid2dGraph_4_6.gr b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/Toroidal6RegularGrid2dGraph_4_6.gr new file mode 100644 index 0000000..13e962a --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/Toroidal6RegularGrid2dGraph_4_6.gr @@ -0,0 +1,73 @@ +p tw 24 72 +1 2 +1 11 +1 17 +1 20 +1 21 +1 22 +2 11 +2 12 +2 13 +2 22 +2 23 +3 4 +3 9 +3 10 +4 5 +4 10 +5 6 +5 10 +5 11 +5 12 +6 7 +6 12 +6 14 +7 8 +7 14 +7 15 +8 9 +8 15 +8 16 +9 10 +9 16 +9 17 +10 11 +10 17 +11 12 +11 17 +12 14 +13 12 +13 14 +13 18 +13 23 +13 24 +14 15 +15 16 +16 17 +18 3 +18 14 +18 15 +18 19 +18 24 +19 3 +19 4 +19 15 +19 16 +19 20 +20 4 +20 16 +20 17 +20 21 +21 4 +21 5 +21 6 +21 22 +22 6 +22 7 +22 23 +23 7 
+23 8 +23 24 +24 3 +24 8 +24 9 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/Toroidal6RegularGrid2dGraph_4_6.td b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/Toroidal6RegularGrid2dGraph_4_6.td new file mode 100644 index 0000000..98511fa --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/Toroidal6RegularGrid2dGraph_4_6.td @@ -0,0 +1,25 @@ +c width = 9, time = 0.117840 +s td 12 10 24 +b 1 3 4 7 9 10 13 14 17 20 23 +b 2 4 6 7 10 13 14 17 20 22 23 +b 3 4 6 10 12 13 14 17 20 22 23 +b 4 2 4 6 10 12 13 17 20 22 23 +b 5 2 4 6 10 11 12 17 20 21 22 +b 6 1 2 11 17 20 21 22 +b 7 4 5 6 10 11 12 21 +b 8 3 4 7 9 13 14 17 19 20 23 +b 9 3 7 9 13 14 16 17 19 20 23 +b 10 3 7 9 13 14 15 16 19 23 24 +b 11 7 8 9 15 16 23 24 +b 12 3 13 14 15 18 19 24 +5 6 +5 7 +4 5 +3 4 +2 3 +1 2 +10 11 +10 12 +9 10 +8 9 +1 8 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/dimacs_anna.gr b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/dimacs_anna.gr new file mode 100644 index 0000000..19f3f69 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/dimacs_anna.gr @@ -0,0 +1,261 @@ +p tw 138 260 +1 7 +2 76 +2 72 +51 110 +51 113 +51 111 +36 112 +36 107 +36 83 +56 119 +56 7 +78 107 +79 90 +79 113 +79 83 +82 111 +83 104 +83 112 +83 100 +83 110 +83 111 +83 113 +83 107 +86 110 +87 98 +87 113 +88 110 +89 133 +89 41 +90 113 +98 113 +99 113 +100 107 +101 110 +102 119 +102 121 +102 7 +103 113 +104 107 +108 113 +110 111 +110 113 +111 113 +115 119 +116 123 +116 7 +116 4 +116 119 +118 123 +119 11 +119 20 +119 132 +119 28 +119 137 +119 15 +119 43 +119 130 +119 4 +119 120 +119 18 +119 38 +119 33 +119 42 +119 13 +119 6 +119 44 +119 126 +119 30 +119 27 +119 124 +119 32 +119 134 +119 123 +119 9 +119 19 +119 121 +119 7 +122 19 +123 20 +123 4 +123 124 +123 136 +123 134 +123 7 +124 18 +124 7 +125 31 +130 7 +130 44 +131 136 +132 7 +134 136 +138 35 +4 31 +4 44 +4 7 +5 41 +6 44 +7 29 +7 8 +7 23 +7 28 +7 44 +7 38 
+7 19 +7 21 +13 42 +13 44 +19 44 +19 41 +31 44 +31 35 +42 44 +62 72 +62 110 +62 45 +84 48 +84 14 +95 45 +95 103 +95 113 +106 60 +117 119 +117 83 +117 45 +117 77 +117 54 +117 110 +117 67 +117 111 +117 113 +117 87 +117 7 +128 72 +128 83 +128 25 +128 97 +128 48 +128 14 +14 109 +14 58 +14 68 +14 50 +14 90 +14 85 +14 104 +14 36 +14 92 +14 79 +14 25 +14 97 +14 78 +14 48 +14 107 +14 83 +14 21 +14 45 +14 59 +14 64 +14 113 +14 70 +14 72 +25 97 +25 48 +45 57 +45 103 +45 99 +45 83 +45 77 +45 54 +45 67 +45 111 +45 88 +45 86 +45 93 +45 69 +45 49 +45 59 +45 64 +45 113 +45 110 +45 72 +45 46 +45 9 +47 77 +47 72 +47 85 +48 97 +48 83 +48 72 +49 64 +49 77 +49 111 +49 113 +49 110 +49 93 +49 69 +50 72 +52 105 +52 119 +54 113 +54 83 +54 77 +54 110 +54 67 +54 111 +55 72 +57 64 +57 113 +57 110 +57 111 +58 109 +59 110 +59 111 +59 108 +59 98 +59 87 +59 113 +59 64 +64 110 +64 111 +64 98 +64 113 +65 110 +65 111 +65 119 +65 113 +67 113 +67 83 +67 77 +67 110 +67 111 +69 93 +70 111 +70 82 +70 75 +70 72 +71 85 +71 72 +72 77 +72 111 +72 113 +72 90 +72 82 +72 92 +72 75 +72 76 +72 94 +72 85 +72 133 +72 83 +72 21 +72 110 +75 82 +76 94 +77 85 +77 113 +77 83 +77 110 +77 111 diff --git a/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/dimacs_anna.td b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/dimacs_anna.td new file mode 100644 index 0000000..2a96a0b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/tw-solver-bugs/dimacs_anna.td @@ -0,0 +1,221 @@ +c width = 8, time = 0.269917 +s td 110 9 138 +b 1 45 51 62 72 110 111 113 +b 2 14 45 49 72 77 110 111 113 117 +b 3 14 45 72 110 111 113 117 119 +b 4 7 14 44 72 117 119 123 124 130 +b 5 4 7 44 116 119 123 +b 6 7 19 44 72 119 +b 7 7 14 21 72 +b 8 65 110 111 113 119 +b 9 14 45 72 77 83 110 111 113 117 +b 10 14 25 48 72 83 97 128 +b 11 45 54 67 77 83 110 111 113 117 +b 12 14 72 79 83 90 113 +b 13 14 47 72 77 85 +b 14 14 45 49 59 98 110 111 113 117 +b 15 14 45 49 59 64 98 110 111 113 +b 16 45 57 64 110 111 113 +b 17 59 87 98 113 
117 +b 18 14 70 72 75 82 111 +b 19 72 76 +b 20 7 102 119 +b 21 14 36 83 107 +b 22 14 83 104 107 +b 23 13 42 44 119 +b 24 45 49 69 93 +b 25 45 95 103 113 +b 26 119 123 134 +b 27 14 14 58 +b 28 14 58 109 +b 29 44 4 31 +b 30 48 14 84 +b 31 72 14 50 +b 32 72 14 92 +b 33 72 19 41 +b 34 72 41 89 +b 35 72 89 133 +b 36 76 72 2 +b 37 76 72 94 +b 38 83 36 112 +b 39 85 72 71 +b 40 107 14 78 +b 41 107 83 100 +b 42 110 45 86 +b 43 110 45 88 +b 44 113 45 99 +b 45 113 59 108 +b 46 119 44 6 +b 47 119 45 9 +b 48 119 7 28 +b 49 119 7 38 +b 50 119 7 56 +b 51 119 102 121 +b 52 119 7 132 +b 53 123 119 20 +b 54 124 119 18 +b 55 134 123 136 +b 56 7 1 +b 57 41 5 +b 58 7 8 +b 59 119 11 +b 60 119 15 +b 61 7 23 +b 62 119 27 +b 63 7 29 +b 64 119 30 +b 65 119 32 +b 66 119 33 +b 67 31 35 +b 68 35 138 +b 69 119 43 +b 70 45 46 +b 71 119 52 +b 72 52 105 +b 73 72 55 +b 74 14 68 +b 75 110 101 +b 76 119 115 +b 77 123 118 +b 78 119 120 +b 79 19 122 +b 80 31 125 +b 81 119 126 +b 82 136 131 +b 83 119 137 +b 84 3 +b 85 10 +b 86 12 +b 87 16 +b 88 17 +b 89 22 +b 90 24 +b 91 26 +b 92 34 +b 93 37 +b 94 39 +b 95 40 +b 96 53 +b 97 60 106 +b 98 61 +b 99 63 +b 100 66 +b 101 73 +b 102 74 +b 103 80 +b 104 81 +b 105 91 +b 106 96 +b 107 114 +b 108 127 +b 109 129 +b 110 135 +4 5 +4 6 +4 7 +3 4 +3 8 +2 3 +9 10 +9 11 +9 12 +2 9 +2 13 +15 16 +14 15 +14 17 +2 14 +2 18 +1 2 +21 22 +1 19 +4 20 +9 21 +4 23 +2 24 +1 25 +4 26 +2 27 +27 28 +5 29 +10 30 +2 31 +2 32 +6 33 +33 34 +34 35 +19 36 +19 37 +21 38 +13 39 +21 40 +21 41 +1 42 +1 43 +1 44 +14 45 +4 46 +3 47 +4 48 +4 49 +4 50 +20 51 +4 52 +4 53 +4 54 +26 55 +4 56 +33 57 +4 58 +3 59 +3 60 +4 61 +3 62 +4 63 +3 64 +3 65 +3 66 +29 67 +67 68 +3 69 +1 70 +3 71 +71 72 +1 73 +2 74 +1 75 +3 76 +4 77 +3 78 +6 79 +29 80 +3 81 +55 82 +3 83 +1 84 +84 85 +85 86 +86 87 +87 88 +88 89 +89 90 +90 91 +91 92 +92 93 +93 94 +94 95 +95 96 +96 97 +97 98 +98 99 +99 100 +100 101 +101 102 +102 103 +103 104 +104 105 +105 106 +106 107 +107 108 +108 109 +109 110 diff --git 
a/solvers/TCS-Meiji/td-validate-master/test/valid/empty.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/empty.gr new file mode 100644 index 0000000..a72e348 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/empty.gr @@ -0,0 +1 @@ +p tw 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/empty.td b/solvers/TCS-Meiji/td-validate-master/test/valid/empty.td new file mode 100644 index 0000000..6a8be87 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/empty.td @@ -0,0 +1 @@ +s td 0 0 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/gr-only.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/gr-only.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/gr-only.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/p-num-vertices-larger.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/p-num-vertices-larger.gr new file mode 100644 index 0000000..13b152b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/p-num-vertices-larger.gr @@ -0,0 +1,5 @@ +p tw 6 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/p-num-vertices-larger.td b/solvers/TCS-Meiji/td-validate-master/test/valid/p-num-vertices-larger.td new file mode 100644 index 0000000..021f88f --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/p-num-vertices-larger.td @@ -0,0 +1,6 @@ +s td 3 4 6 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 6 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/single-edge.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/single-edge.gr new file mode 100644 index 0000000..1870d43 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/single-edge.gr @@ -0,0 +1,2 @@ +p tw 2 1 +1 2 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/single-edge.td b/solvers/TCS-Meiji/td-validate-master/test/valid/single-edge.td new 
file mode 100644 index 0000000..480758f --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/single-edge.td @@ -0,0 +1,2 @@ +s td 1 2 2 +b 1 1 2 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/single-vertex.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/single-vertex.gr new file mode 100644 index 0000000..f53426c --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/single-vertex.gr @@ -0,0 +1 @@ +p tw 1 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/single-vertex.td b/solvers/TCS-Meiji/td-validate-master/test/valid/single-vertex.td new file mode 100644 index 0000000..b4c5489 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/single-vertex.td @@ -0,0 +1,2 @@ +s td 1 1 1 +b 1 1 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices-2.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices-2.gr new file mode 100644 index 0000000..114d4a9 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices-2.gr @@ -0,0 +1 @@ +p tw 2 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices-2.td b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices-2.td new file mode 100644 index 0000000..480758f --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices-2.td @@ -0,0 +1,2 @@ +s td 1 2 2 +b 1 1 2 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices.gr new file mode 100644 index 0000000..114d4a9 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices.gr @@ -0,0 +1 @@ +p tw 2 0 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices.td b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices.td new file mode 100644 index 0000000..5edf3c1 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/two-vertices.td @@ -0,0 +1,4 @@ +s td 2 1 2 +b 1 1 +b 2 2 +1 2 diff 
--git a/solvers/TCS-Meiji/td-validate-master/test/valid/web1.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/web1.gr new file mode 100644 index 0000000..9d46124 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web1.gr @@ -0,0 +1,7 @@ +c This file describes a path with five vertices and four edges. +p tw 5 4 +1 2 +2 3 +c we are half-way done with the instance definition. +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web1.td b/solvers/TCS-Meiji/td-validate-master/test/valid/web1.td new file mode 100644 index 0000000..2d37dd9 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web1.td @@ -0,0 +1,9 @@ +c This file describes a tree decomposition with 4 bags, width 2, for a graph with 5 vertices +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +1 2 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web2.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/web2.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web2.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web2.td b/solvers/TCS-Meiji/td-validate-master/test/valid/web2.td new file mode 100644 index 0000000..89a486b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web2.td @@ -0,0 +1,8 @@ +s td 4 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +b 4 +1 2 +2 3 +2 4 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web3.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/web3.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web3.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web3.td b/solvers/TCS-Meiji/td-validate-master/test/valid/web3.td new file mode 100644 index 0000000..1ee6e18 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web3.td @@ -0,0 
+1,6 @@ +s td 3 3 5 +b 2 2 3 4 +b 1 1 2 3 +b 3 3 4 5 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web4.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/web4.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web4.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/web4.td b/solvers/TCS-Meiji/td-validate-master/test/valid/web4.td new file mode 100644 index 0000000..7b3b7f1 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/web4.td @@ -0,0 +1,6 @@ +s td 3 3 5 +b 1 1 2 3 +b 2 2 3 4 +b 3 3 4 5 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/wedge.gr b/solvers/TCS-Meiji/td-validate-master/test/valid/wedge.gr new file mode 100644 index 0000000..c28f165 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/wedge.gr @@ -0,0 +1,3 @@ +p tw 3 2 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/td-validate-master/test/valid/wedge.td b/solvers/TCS-Meiji/td-validate-master/test/valid/wedge.td new file mode 100644 index 0000000..4f5134c --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test/valid/wedge.td @@ -0,0 +1,4 @@ +s td 2 2 3 +b 1 1 2 +b 2 2 3 +1 2 diff --git a/solvers/TCS-Meiji/td-validate-master/test_td-validate.sh b/solvers/TCS-Meiji/td-validate-master/test_td-validate.sh new file mode 100644 index 0000000..11cd196 --- /dev/null +++ b/solvers/TCS-Meiji/td-validate-master/test_td-validate.sh @@ -0,0 +1,48 @@ +#!/usr/bin/env bash + +VALIDATE=./td-validate + +NUM_PASSED=0 +NUM_ALL=0 + +do_test() +{ +for grfile in test/$1/*.gr; +do + file="${grfile%%.gr}" + NUM_ALL=$[$NUM_ALL + 1] + if [ -f "$file.td" ] + then + $VALIDATE "$grfile" "$file.td" &> /dev/null; + STATE=$? + INFO="(gr + td)" + else + $VALIDATE "$grfile" &> /dev/null; + STATE=$? 
+ INFO="(gr)" + fi + if [ "0$STATE" -eq "0$2" ] + then + tput setaf 2; + echo "ok " "$file" "$INFO" + NUM_PASSED=$[$NUM_PASSED + 1] + else + tput setaf 1; + echo "FAIL" "$file" "$INFO" + fi +done +} + +do_test valid 0 +echo +do_test invalid 1 +echo +do_test empty 2 + +tput sgr0; + +echo +echo "$NUM_PASSED of $NUM_ALL tests passed." +echo + +test $NUM_PASSED = $NUM_ALL diff --git a/solvers/TCS-Meiji/test_instance/GNP_20_20_0.gr b/solvers/TCS-Meiji/test_instance/GNP_20_20_0.gr new file mode 100644 index 0000000..5bcf70b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/GNP_20_20_0.gr @@ -0,0 +1,47 @@ +p tw 20 46 +1 13 +1 14 +1 16 +1 19 +1 4 +2 5 +11 15 +11 17 +11 18 +12 14 +12 16 +12 17 +13 18 +15 19 +16 18 +16 19 +17 18 +17 19 +17 20 +18 20 +19 20 +3 19 +3 5 +3 7 +4 13 +4 15 +4 16 +4 18 +4 19 +4 6 +4 9 +5 20 +5 7 +5 9 +6 16 +6 7 +7 15 +7 20 +8 12 +8 13 +8 15 +8 16 +9 18 +9 19 +10 12 +10 17 diff --git a/solvers/TCS-Meiji/test_instance/RandomBipartite_25_50_1.gr b/solvers/TCS-Meiji/test_instance/RandomBipartite_25_50_1.gr new file mode 100644 index 0000000..d362350 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/RandomBipartite_25_50_1.gr @@ -0,0 +1,115 @@ +p tw 75 114 +1 22 +1 71 +2 32 +2 54 +2 62 +2 65 +3 37 +4 20 +4 33 +4 39 +4 67 +4 70 +5 41 +5 73 +6 28 +6 30 +6 39 +7 19 +7 22 +7 28 +7 43 +7 69 +7 70 +9 22 +9 25 +9 49 +9 63 +9 66 +10 23 +10 38 +10 54 +10 65 +10 70 +11 25 +11 28 +11 33 +12 36 +12 39 +12 42 +12 44 +12 47 +12 64 +12 73 +13 27 +13 50 +13 51 +13 54 +13 70 +13 73 +14 41 +14 60 +14 65 +14 72 +14 73 +15 27 +15 32 +15 41 +15 42 +15 52 +15 58 +15 59 +15 62 +15 66 +15 67 +16 19 +16 37 +16 51 +16 54 +16 66 +17 39 +17 58 +17 61 +18 45 +18 70 +24 23 +35 29 +35 39 +35 42 +35 45 +35 47 +35 54 +35 59 +35 65 +35 72 +46 30 +46 31 +46 32 +46 48 +46 50 +46 61 +46 67 +57 25 +57 32 +57 37 +57 39 +57 40 +57 50 +57 54 +57 60 +57 71 +68 41 +68 42 +68 64 +74 21 +74 37 +74 38 +74 39 +74 61 +74 73 +75 25 +75 38 +75 41 +75 42 +75 63 diff --git 
a/solvers/TCS-Meiji/test_instance/RandomGNM_100_100.gr b/solvers/TCS-Meiji/test_instance/RandomGNM_100_100.gr new file mode 100644 index 0000000..65f5f15 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/RandomGNM_100_100.gr @@ -0,0 +1,101 @@ +p tw 100 100 +1 8 +1 78 +4 17 +4 31 +4 64 +4 92 +6 47 +7 12 +7 62 +7 64 +8 32 +8 78 +8 82 +9 12 +9 17 +9 49 +10 88 +12 50 +12 54 +13 21 +13 38 +13 93 +15 21 +15 27 +15 62 +16 70 +17 45 +17 63 +17 69 +17 99 +18 42 +18 51 +18 55 +18 58 +18 85 +19 25 +19 33 +19 77 +20 81 +20 89 +22 27 +22 28 +22 82 +22 95 +23 40 +24 86 +26 91 +26 94 +27 55 +27 99 +28 54 +31 80 +34 51 +34 67 +37 56 +37 73 +37 87 +37 94 +38 61 +39 80 +40 73 +40 74 +45 94 +46 40 +46 99 +48 67 +48 92 +49 63 +52 100 +55 85 +56 61 +58 93 +59 63 +59 87 +60 67 +61 92 +62 88 +62 97 +63 66 +63 97 +66 81 +66 100 +67 91 +68 5 +68 15 +68 40 +68 43 +68 95 +72 75 +77 78 +78 83 +79 51 +79 59 +79 95 +83 84 +83 85 +87 88 +90 3 +90 15 +90 69 diff --git a/solvers/TCS-Meiji/test_instance/Toroidal6RegularGrid2dGraph_4_6.gr b/solvers/TCS-Meiji/test_instance/Toroidal6RegularGrid2dGraph_4_6.gr new file mode 100644 index 0000000..13e962a --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/Toroidal6RegularGrid2dGraph_4_6.gr @@ -0,0 +1,73 @@ +p tw 24 72 +1 2 +1 11 +1 17 +1 20 +1 21 +1 22 +2 11 +2 12 +2 13 +2 22 +2 23 +3 4 +3 9 +3 10 +4 5 +4 10 +5 6 +5 10 +5 11 +5 12 +6 7 +6 12 +6 14 +7 8 +7 14 +7 15 +8 9 +8 15 +8 16 +9 10 +9 16 +9 17 +10 11 +10 17 +11 12 +11 17 +12 14 +13 12 +13 14 +13 18 +13 23 +13 24 +14 15 +15 16 +16 17 +18 3 +18 14 +18 15 +18 19 +18 24 +19 3 +19 4 +19 15 +19 16 +19 20 +20 4 +20 16 +20 17 +20 21 +21 4 +21 5 +21 6 +21 22 +22 6 +22 7 +22 23 +23 7 +23 8 +23 24 +24 3 +24 8 +24 9 diff --git a/solvers/TCS-Meiji/test_instance/dimacs_anna.gr b/solvers/TCS-Meiji/test_instance/dimacs_anna.gr new file mode 100644 index 0000000..19f3f69 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/dimacs_anna.gr @@ -0,0 +1,261 @@ +p tw 138 260 +1 7 +2 76 +2 72 +51 110 +51 113 +51 111 +36 
112 +36 107 +36 83 +56 119 +56 7 +78 107 +79 90 +79 113 +79 83 +82 111 +83 104 +83 112 +83 100 +83 110 +83 111 +83 113 +83 107 +86 110 +87 98 +87 113 +88 110 +89 133 +89 41 +90 113 +98 113 +99 113 +100 107 +101 110 +102 119 +102 121 +102 7 +103 113 +104 107 +108 113 +110 111 +110 113 +111 113 +115 119 +116 123 +116 7 +116 4 +116 119 +118 123 +119 11 +119 20 +119 132 +119 28 +119 137 +119 15 +119 43 +119 130 +119 4 +119 120 +119 18 +119 38 +119 33 +119 42 +119 13 +119 6 +119 44 +119 126 +119 30 +119 27 +119 124 +119 32 +119 134 +119 123 +119 9 +119 19 +119 121 +119 7 +122 19 +123 20 +123 4 +123 124 +123 136 +123 134 +123 7 +124 18 +124 7 +125 31 +130 7 +130 44 +131 136 +132 7 +134 136 +138 35 +4 31 +4 44 +4 7 +5 41 +6 44 +7 29 +7 8 +7 23 +7 28 +7 44 +7 38 +7 19 +7 21 +13 42 +13 44 +19 44 +19 41 +31 44 +31 35 +42 44 +62 72 +62 110 +62 45 +84 48 +84 14 +95 45 +95 103 +95 113 +106 60 +117 119 +117 83 +117 45 +117 77 +117 54 +117 110 +117 67 +117 111 +117 113 +117 87 +117 7 +128 72 +128 83 +128 25 +128 97 +128 48 +128 14 +14 109 +14 58 +14 68 +14 50 +14 90 +14 85 +14 104 +14 36 +14 92 +14 79 +14 25 +14 97 +14 78 +14 48 +14 107 +14 83 +14 21 +14 45 +14 59 +14 64 +14 113 +14 70 +14 72 +25 97 +25 48 +45 57 +45 103 +45 99 +45 83 +45 77 +45 54 +45 67 +45 111 +45 88 +45 86 +45 93 +45 69 +45 49 +45 59 +45 64 +45 113 +45 110 +45 72 +45 46 +45 9 +47 77 +47 72 +47 85 +48 97 +48 83 +48 72 +49 64 +49 77 +49 111 +49 113 +49 110 +49 93 +49 69 +50 72 +52 105 +52 119 +54 113 +54 83 +54 77 +54 110 +54 67 +54 111 +55 72 +57 64 +57 113 +57 110 +57 111 +58 109 +59 110 +59 111 +59 108 +59 98 +59 87 +59 113 +59 64 +64 110 +64 111 +64 98 +64 113 +65 110 +65 111 +65 119 +65 113 +67 113 +67 83 +67 77 +67 110 +67 111 +69 93 +70 111 +70 82 +70 75 +70 72 +71 85 +71 72 +72 77 +72 111 +72 113 +72 90 +72 82 +72 92 +72 75 +72 76 +72 94 +72 85 +72 133 +72 83 +72 21 +72 110 +75 82 +76 94 +77 85 +77 113 +77 83 +77 110 +77 111 diff --git a/solvers/TCS-Meiji/test_instance/empty.gr 
b/solvers/TCS-Meiji/test_instance/empty.gr new file mode 100644 index 0000000..a72e348 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/empty.gr @@ -0,0 +1 @@ +p tw 0 0 diff --git a/solvers/TCS-Meiji/test_instance/ex001.gr b/solvers/TCS-Meiji/test_instance/ex001.gr new file mode 100644 index 0000000..15cbac1 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/ex001.gr @@ -0,0 +1,649 @@ +p tw 262 648 +1 35 +35 244 +35 234 +35 152 +35 183 +1 244 +241 244 +219 244 +105 244 +82 126 +126 241 +67 126 +82 241 +67 82 +82 136 +82 83 +219 241 +105 241 +67 241 +69 241 +105 219 +219 251 +219 249 +190 219 +219 233 +105 251 +251 252 +212 251 +154 251 +67 69 +69 211 +69 252 +7 69 +83 136 +83 250 +83 211 +83 253 +109 140 +140 250 +140 248 +109 250 +109 248 +36 109 +211 250 +250 253 +248 250 +121 250 +211 252 +7 211 +211 253 +97 211 +212 252 +154 252 +7 252 +167 252 +154 212 +2 212 +201 212 +18 212 +89 212 +2 154 +2 246 +2 16 +2 40 +7 167 +142 167 +167 246 +167 179 +97 253 +57 97 +97 142 +45 97 +121 248 +49 121 +57 121 +104 121 +9 36 +36 49 +32 36 +9 49 +9 32 +49 57 +49 104 +32 49 +49 220 +57 142 +45 57 +57 104 +57 187 +142 246 +142 179 +45 142 +142 177 +16 246 +40 246 +179 246 +246 255 +16 40 +16 227 +16 30 +16 95 +16 168 +40 227 +227 230 +42 227 +72 227 +179 255 +4 255 +230 255 +141 255 +45 177 +102 177 +4 177 +177 260 +104 187 +153 187 +102 187 +23 187 +32 220 +153 220 +102 153 +23 153 +4 102 +102 260 +23 102 +102 217 +4 230 +4 141 +4 260 +4 65 +42 230 +72 230 +141 230 +24 230 +42 72 +42 236 +42 191 +42 159 +27 42 +72 236 +130 236 +145 236 +143 236 +24 141 +24 156 +24 130 +24 173 +65 260 +65 103 +65 156 +65 180 +23 217 +103 217 +103 156 +103 180 +130 156 +156 173 +156 180 +54 156 +130 145 +130 143 +130 173 +130 171 +143 145 +145 160 +39 145 +60 145 +87 145 +143 160 +71 160 +160 193 +158 160 +171 173 +63 171 +71 171 +171 228 +54 180 +54 63 +63 71 +63 228 +71 193 +71 158 +71 228 +71 214 +158 193 +193 221 +169 193 +107 193 +166 193 +158 221 +214 228 +152 234 +183 234 +152 183 
+152 249 +152 157 +114 152 +152 199 +183 249 +190 249 +233 249 +190 233 +190 201 +190 224 +29 190 +178 190 +201 233 +18 201 +89 201 +18 89 +18 30 +18 151 +18 218 +18 118 +30 89 +30 95 +30 168 +95 168 +95 191 +17 95 +76 95 +59 95 +168 191 +159 191 +27 191 +27 159 +39 159 +132 159 +135 159 +159 184 +27 39 +39 60 +39 87 +60 87 +60 169 +60 194 +60 182 +60 245 +87 169 +107 169 +166 169 +107 166 +107 239 +107 127 +107 115 +100 107 +166 239 +88 165 +88 157 +88 147 +88 149 +88 196 +157 165 +114 157 +157 199 +114 199 +114 224 +114 134 +114 229 +114 216 +199 224 +29 224 +178 224 +29 178 +29 151 +29 129 +29 56 +29 210 +151 178 +151 218 +118 151 +118 218 +17 218 +218 259 +150 218 +19 218 +17 118 +17 76 +17 59 +59 76 +76 132 +76 213 +76 139 +76 93 +59 132 +132 135 +132 184 +135 184 +135 194 +92 135 +70 135 +8 135 +184 194 +182 194 +194 245 +182 245 +127 182 +182 209 +98 182 +78 182 +127 245 +115 127 +100 127 +100 115 +62 115 +74 115 +22 115 +115 204 +62 100 +147 149 +147 196 +149 196 +134 149 +120 149 +41 149 +149 203 +134 196 +134 229 +134 216 +216 229 +129 229 +91 229 +43 229 +66 229 +129 216 +56 129 +129 210 +56 210 +56 259 +56 164 +56 81 +48 56 +210 259 +150 259 +19 259 +19 150 +150 213 +26 150 +150 162 +68 150 +19 213 +139 213 +93 213 +93 139 +92 139 +64 139 +44 139 +139 242 +92 93 +70 92 +8 92 +8 70 +70 209 +70 85 +31 70 +70 223 +8 209 +98 209 +78 209 +78 98 +74 98 +98 195 +53 98 +98 231 +74 78 +22 74 +74 204 +22 204 +22 128 +22 240 +22 124 +22 25 +128 204 +41 120 +120 203 +41 203 +41 91 +41 122 +41 84 +41 176 +91 203 +43 91 +66 91 +43 66 +43 164 +43 235 +43 47 +43 80 +66 164 +81 164 +48 164 +48 81 +26 81 +38 81 +81 96 +81 188 +26 48 +26 162 +26 68 +68 162 +64 162 +15 162 +75 162 +162 202 +64 68 +44 64 +64 242 +44 242 +44 85 +20 44 +44 73 +44 256 +85 242 +31 85 +85 223 +31 223 +31 195 +31 46 +3 31 +31 55 +195 223 +53 195 +195 231 +53 231 +53 240 +12 53 +53 161 +53 206 +231 240 +124 240 +25 240 +25 124 +124 237 +84 122 +122 176 +84 176 +84 235 +84 238 +84 181 +84 226 +176 
235 +47 235 +80 235 +47 80 +38 47 +47 215 +47 77 +13 47 +38 80 +38 96 +38 188 +96 188 +15 96 +10 96 +96 138 +37 96 +15 188 +15 75 +15 202 +75 202 +20 75 +75 174 +75 123 +75 106 +20 202 +20 73 +20 256 +73 256 +46 73 +73 108 +73 197 +73 133 +46 256 +3 46 +46 55 +3 55 +3 12 +3 200 +3 52 +3 205 +12 55 +12 161 +12 206 +161 206 +161 237 +161 192 +161 172 +137 161 +206 237 +181 238 +226 238 +181 226 +181 215 +181 261 +33 181 +101 181 +215 226 +77 215 +13 215 +13 77 +10 77 +34 77 +77 262 +77 90 +10 13 +10 138 +10 37 +37 138 +138 174 +138 243 +6 138 +138 257 +37 174 +123 174 +106 174 +106 123 +108 123 +110 123 +111 123 +113 123 +106 108 +108 197 +108 133 +133 197 +197 200 +116 197 +94 197 +189 197 +133 200 +52 200 +200 205 +52 205 +52 192 +52 186 +52 144 +11 52 +192 205 +172 192 +137 192 +137 172 +5 172 +33 261 +101 261 +33 101 +33 34 +34 101 +34 262 +34 90 +90 262 +243 262 +131 262 +198 262 +21 262 +90 243 +6 243 +243 257 +6 257 +6 110 +6 99 +6 50 +6 175 +110 257 +110 111 +110 113 +111 113 +111 116 +111 207 +111 185 +111 112 +113 116 +94 116 +116 189 +94 189 +94 186 +94 148 +94 125 +94 119 +186 189 +144 186 +11 186 +11 144 +5 144 +86 144 +28 144 +51 144 +5 11 +131 198 +21 131 +21 198 +99 198 +21 99 +50 99 +99 175 +50 175 +50 207 +50 232 +50 225 +50 155 +175 207 +185 207 +112 207 +112 185 +148 185 +185 254 +146 185 +117 185 +112 148 +125 148 +119 148 +119 125 +86 125 +14 125 +125 170 +125 258 +86 119 +28 86 +51 86 +28 51 +28 247 +225 232 +155 232 +155 225 +225 254 +155 254 +146 254 +117 254 +117 146 +14 146 +146 163 +58 146 +146 222 +14 117 +14 170 +14 258 +170 258 +170 247 +79 170 +170 208 +61 170 +247 258 +58 163 +163 222 +58 222 +58 79 +79 222 +79 208 +61 79 +61 208 diff --git a/solvers/TCS-Meiji/test_instance/ex003.gr b/solvers/TCS-Meiji/test_instance/ex003.gr new file mode 100644 index 0000000..649942b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/ex003.gr @@ -0,0 +1,2114 @@ +p tw 92 2113 +57 61 +57 62 +46 57 +57 87 +55 57 +33 57 +57 71 +38 57 +14 57 +9 57 +11 57 
+57 63 +57 66 +7 57 +2 57 +54 57 +57 76 +57 78 +1 57 +57 90 +20 57 +35 57 +16 57 +57 59 +17 57 +24 57 +57 82 +44 57 +57 70 +28 57 +57 83 +57 77 +57 75 +57 84 +57 69 +57 79 +57 60 +4 57 +36 57 +57 73 +5 61 +61 62 +46 61 +61 87 +55 61 +33 61 +61 71 +38 61 +14 61 +9 61 +11 61 +61 63 +61 66 +7 61 +2 61 +54 61 +61 76 +61 78 +1 61 +61 90 +20 61 +35 61 +16 61 +59 61 +17 61 +24 61 +61 82 +44 61 +61 67 +61 70 +25 61 +28 61 +21 61 +61 83 +61 77 +61 75 +61 84 +61 69 +61 79 +60 61 +4 61 +36 61 +61 73 +5 41 +5 48 +5 39 +5 29 +5 26 +5 12 +5 91 +5 37 +5 23 +5 11 +5 92 +5 66 +5 6 +2 5 +5 40 +5 76 +5 43 +1 5 +5 30 +5 20 +5 32 +5 16 +5 18 +5 17 +5 65 +5 82 +5 22 +5 70 +5 56 +5 50 +5 28 +5 52 +5 83 +5 8 +5 77 +5 81 +5 31 +5 64 +5 74 +5 89 +5 88 +5 27 +5 47 +41 48 +39 41 +29 41 +26 41 +12 41 +41 91 +37 41 +23 41 +11 41 +41 92 +41 66 +6 41 +2 41 +40 41 +41 76 +41 43 +1 41 +30 41 +20 41 +32 41 +16 41 +18 41 +17 41 +41 65 +41 82 +22 41 +41 70 +41 50 +28 41 +41 52 +41 83 +8 41 +41 77 +41 81 +31 41 +41 64 +41 74 +41 89 +41 88 +27 41 +41 47 +58 86 +62 86 +46 86 +86 87 +55 86 +33 86 +71 86 +38 86 +14 86 +9 86 +11 86 +63 86 +66 86 +7 86 +2 86 +54 86 +76 86 +78 86 +1 86 +86 90 +20 86 +35 86 +16 86 +59 86 +17 86 +24 86 +82 86 +44 86 +70 86 +28 86 +83 86 +77 86 +75 86 +84 86 +69 86 +79 86 +60 86 +4 86 +36 86 +73 86 +58 85 +58 62 +46 58 +58 87 +55 58 +33 58 +58 71 +38 58 +14 58 +9 58 +11 58 +58 63 +58 66 +7 58 +2 58 +54 58 +58 76 +58 78 +1 58 +58 90 +20 58 +35 58 +16 58 +58 59 +17 58 +24 58 +58 82 +44 58 +58 67 +58 70 +25 58 +28 58 +21 58 +58 83 +58 77 +58 75 +58 84 +58 69 +58 79 +58 60 +4 58 +36 58 +58 73 +34 85 +48 85 +39 85 +29 85 +26 85 +12 85 +85 91 +37 85 +23 85 +11 85 +85 92 +66 85 +6 85 +2 85 +40 85 +76 85 +43 85 +1 85 +30 85 +20 85 +32 85 +16 85 +18 85 +17 85 +65 85 +82 85 +22 85 +70 85 +56 85 +50 85 +28 85 +52 85 +83 85 +8 85 +77 85 +81 85 +31 85 +64 85 +74 85 +85 89 +85 88 +27 85 +47 85 +34 48 +34 39 +29 34 +26 34 +12 34 +34 91 +34 37 +23 34 +11 34 +34 92 +34 66 +6 34 +2 34 +34 40 +34 
76 +34 43 +1 34 +30 34 +20 34 +32 34 +16 34 +18 34 +17 34 +34 65 +34 82 +22 34 +34 70 +34 50 +28 34 +34 52 +34 83 +8 34 +34 77 +34 81 +31 34 +34 64 +34 74 +34 89 +34 88 +27 34 +34 47 +13 72 +62 72 +46 72 +72 87 +55 72 +33 72 +71 72 +38 72 +14 72 +9 72 +11 72 +63 72 +66 72 +7 72 +2 72 +54 72 +72 76 +72 78 +1 72 +72 90 +20 72 +35 72 +16 72 +59 72 +17 72 +24 72 +72 82 +44 72 +70 72 +28 72 +72 83 +72 77 +72 75 +72 84 +69 72 +72 79 +60 72 +4 72 +36 72 +72 73 +13 15 +13 62 +13 46 +13 87 +13 55 +13 33 +13 71 +13 38 +13 14 +9 13 +11 13 +13 63 +13 66 +7 13 +2 13 +13 54 +13 76 +13 78 +1 13 +13 90 +13 20 +13 35 +13 16 +13 59 +13 17 +13 24 +13 82 +13 44 +13 67 +13 70 +13 25 +13 28 +13 21 +13 83 +13 77 +13 75 +13 84 +13 69 +13 79 +13 60 +4 13 +13 36 +13 73 +15 53 +15 48 +15 39 +15 29 +15 26 +12 15 +15 91 +15 37 +15 23 +11 15 +15 92 +15 66 +6 15 +2 15 +15 40 +15 76 +15 43 +1 15 +15 30 +15 20 +15 32 +15 16 +15 18 +15 17 +15 65 +15 82 +15 22 +15 70 +15 56 +15 50 +15 28 +15 52 +15 83 +8 15 +15 77 +15 81 +15 31 +15 64 +15 74 +15 89 +15 88 +15 27 +15 47 +48 53 +39 53 +29 53 +26 53 +12 53 +53 91 +37 53 +23 53 +11 53 +53 92 +53 66 +6 53 +2 53 +40 53 +53 76 +43 53 +1 53 +30 53 +20 53 +32 53 +16 53 +18 53 +17 53 +53 65 +53 82 +22 53 +53 70 +50 53 +28 53 +52 53 +53 83 +8 53 +53 77 +53 81 +31 53 +53 64 +53 74 +53 89 +53 88 +27 53 +47 53 +10 42 +42 62 +42 46 +42 87 +42 55 +33 42 +42 71 +38 42 +14 42 +9 42 +11 42 +42 63 +42 66 +7 42 +2 42 +42 54 +42 76 +42 78 +1 42 +42 90 +20 42 +35 42 +16 42 +42 59 +17 42 +24 42 +42 82 +42 44 +42 70 +28 42 +42 83 +42 77 +42 75 +42 84 +42 69 +42 79 +42 60 +4 42 +36 42 +42 73 +10 19 +10 62 +10 46 +10 87 +10 55 +10 33 +10 71 +10 38 +10 14 +9 10 +10 11 +10 63 +10 66 +7 10 +2 10 +10 54 +10 76 +10 78 +1 10 +10 90 +10 20 +10 35 +10 16 +10 59 +10 17 +10 24 +10 82 +10 44 +10 67 +10 70 +10 25 +10 28 +10 21 +10 83 +10 77 +10 75 +10 84 +10 69 +10 79 +10 60 +4 10 +10 36 +10 73 +19 49 +19 48 +19 39 +19 29 +19 26 +12 19 +19 91 +19 37 +19 23 +11 19 +19 92 +19 66 +6 19 +2 
19 +19 40 +19 76 +19 43 +1 19 +19 30 +19 20 +19 32 +16 19 +18 19 +17 19 +19 65 +19 82 +19 22 +19 70 +19 56 +19 50 +19 28 +19 52 +19 83 +8 19 +19 77 +19 81 +19 31 +19 64 +19 74 +19 89 +19 88 +19 27 +19 47 +48 49 +39 49 +29 49 +26 49 +12 49 +49 91 +37 49 +23 49 +11 49 +49 92 +49 66 +6 49 +2 49 +40 49 +49 76 +43 49 +1 49 +30 49 +20 49 +32 49 +16 49 +18 49 +17 49 +49 65 +49 82 +22 49 +49 70 +49 50 +28 49 +49 52 +49 83 +8 49 +49 77 +49 81 +31 49 +49 64 +49 74 +49 89 +49 88 +27 49 +47 49 +46 62 +9 62 +11 62 +62 63 +62 66 +7 62 +2 62 +54 62 +62 76 +62 78 +1 62 +62 90 +20 62 +35 62 +16 62 +59 62 +17 62 +24 62 +62 82 +44 62 +62 70 +28 62 +62 83 +62 77 +62 75 +62 84 +62 69 +62 79 +60 62 +4 62 +36 62 +62 73 +46 48 +9 46 +11 46 +46 63 +46 66 +7 46 +2 46 +46 54 +46 76 +46 78 +1 46 +46 90 +20 46 +35 46 +16 46 +46 59 +17 46 +24 46 +46 82 +44 46 +46 67 +46 70 +25 46 +28 46 +21 46 +46 83 +46 77 +46 75 +46 84 +46 69 +46 79 +46 60 +4 46 +36 46 +46 73 +39 48 +11 48 +48 92 +48 66 +6 48 +2 48 +40 48 +48 76 +43 48 +1 48 +30 48 +20 48 +32 48 +16 48 +18 48 +17 48 +48 65 +48 82 +22 48 +48 70 +48 56 +48 50 +28 48 +48 52 +48 83 +8 48 +48 77 +48 81 +31 48 +48 64 +48 74 +48 89 +48 88 +27 48 +47 48 +11 39 +39 92 +39 66 +6 39 +2 39 +39 40 +39 76 +39 43 +1 39 +30 39 +20 39 +32 39 +16 39 +18 39 +17 39 +39 65 +39 82 +22 39 +39 70 +39 50 +28 39 +39 52 +39 83 +8 39 +39 77 +39 81 +31 39 +39 64 +39 74 +39 89 +39 88 +27 39 +39 47 +55 87 +9 87 +11 87 +63 87 +66 87 +7 87 +2 87 +54 87 +76 87 +78 87 +1 87 +87 90 +20 87 +35 87 +16 87 +59 87 +17 87 +24 87 +82 87 +44 87 +70 87 +28 87 +83 87 +77 87 +75 87 +84 87 +69 87 +79 87 +60 87 +4 87 +36 87 +73 87 +29 55 +9 55 +11 55 +55 63 +55 66 +7 55 +2 55 +54 55 +55 76 +55 78 +1 55 +55 90 +20 55 +35 55 +16 55 +55 59 +17 55 +24 55 +55 82 +44 55 +55 67 +55 70 +25 55 +28 55 +21 55 +55 83 +55 77 +55 75 +55 84 +55 69 +55 79 +55 60 +4 55 +36 55 +55 73 +26 29 +11 29 +29 92 +29 66 +6 29 +2 29 +29 40 +29 76 +29 43 +1 29 +29 30 +20 29 +29 32 +16 29 +18 29 +17 29 +29 65 +29 82 +22 
29 +29 70 +29 56 +29 50 +28 29 +29 52 +29 83 +8 29 +29 77 +29 81 +29 31 +29 64 +29 74 +29 89 +29 88 +27 29 +29 47 +11 26 +26 92 +26 66 +6 26 +2 26 +26 40 +26 76 +26 43 +1 26 +26 30 +20 26 +26 32 +16 26 +18 26 +17 26 +26 65 +26 82 +22 26 +26 70 +26 50 +26 28 +26 52 +26 83 +8 26 +26 77 +26 81 +26 31 +26 64 +26 74 +26 89 +26 88 +26 27 +26 47 +33 71 +9 33 +11 33 +33 63 +33 66 +7 33 +2 33 +33 54 +33 76 +33 78 +1 33 +33 90 +20 33 +33 35 +16 33 +33 59 +17 33 +24 33 +33 82 +33 44 +33 70 +28 33 +33 83 +33 77 +33 75 +33 84 +33 69 +33 79 +33 60 +4 33 +33 36 +33 73 +12 71 +9 71 +11 71 +63 71 +66 71 +7 71 +2 71 +54 71 +71 76 +71 78 +1 71 +71 90 +20 71 +35 71 +16 71 +59 71 +17 71 +24 71 +71 82 +44 71 +67 71 +70 71 +25 71 +28 71 +21 71 +71 83 +71 77 +71 75 +71 84 +69 71 +71 79 +60 71 +4 71 +36 71 +71 73 +12 91 +11 12 +12 92 +12 66 +6 12 +2 12 +12 40 +12 76 +12 43 +1 12 +12 30 +12 20 +12 32 +12 16 +12 18 +12 17 +12 65 +12 82 +12 22 +12 70 +12 56 +12 50 +12 28 +12 52 +12 83 +8 12 +12 77 +12 81 +12 31 +12 64 +12 74 +12 89 +12 88 +12 27 +12 47 +11 91 +91 92 +66 91 +6 91 +2 91 +40 91 +76 91 +43 91 +1 91 +30 91 +20 91 +32 91 +16 91 +18 91 +17 91 +65 91 +82 91 +22 91 +70 91 +50 91 +28 91 +52 91 +83 91 +8 91 +77 91 +81 91 +31 91 +64 91 +74 91 +89 91 +88 91 +27 91 +47 91 +14 38 +9 38 +11 38 +38 63 +38 66 +7 38 +2 38 +38 54 +38 76 +38 78 +1 38 +38 90 +20 38 +35 38 +16 38 +38 59 +17 38 +24 38 +38 82 +38 44 +38 70 +28 38 +38 83 +38 77 +38 75 +38 84 +38 69 +38 79 +38 60 +4 38 +36 38 +38 73 +14 37 +9 14 +11 14 +14 63 +14 66 +7 14 +2 14 +14 54 +14 76 +14 78 +1 14 +14 90 +14 20 +14 35 +14 16 +14 59 +14 17 +14 24 +14 82 +14 44 +14 67 +14 70 +14 25 +14 28 +14 21 +14 83 +14 77 +14 75 +14 84 +14 69 +14 79 +14 60 +4 14 +14 36 +14 73 +23 37 +11 37 +37 92 +37 66 +6 37 +2 37 +37 40 +37 76 +37 43 +1 37 +30 37 +20 37 +32 37 +16 37 +18 37 +17 37 +37 65 +37 82 +22 37 +37 70 +37 56 +37 50 +28 37 +37 52 +37 83 +8 37 +37 77 +37 81 +31 37 +37 64 +37 74 +37 89 +37 88 +27 37 +37 47 +11 23 +23 92 +23 66 +6 23 +2 
23 +23 40 +23 76 +23 43 +1 23 +23 30 +20 23 +23 32 +16 23 +18 23 +17 23 +23 65 +23 82 +22 23 +23 70 +23 50 +23 28 +23 52 +23 83 +8 23 +23 77 +23 81 +23 31 +23 64 +23 74 +23 89 +23 88 +23 27 +23 47 +9 11 +7 9 +2 9 +9 54 +9 76 +9 78 +1 9 +9 90 +9 20 +9 35 +9 16 +9 59 +9 17 +9 24 +9 82 +9 44 +9 70 +9 28 +9 83 +9 77 +9 75 +9 84 +9 69 +9 79 +9 60 +4 9 +9 36 +9 73 +11 92 +2 11 +11 40 +11 76 +11 43 +1 11 +11 30 +11 90 +11 20 +11 32 +11 35 +11 16 +11 18 +11 59 +11 17 +11 65 +11 24 +11 82 +11 22 +11 44 +11 70 +11 50 +11 28 +11 52 +11 83 +8 11 +11 77 +11 75 +11 84 +11 81 +11 31 +11 69 +11 79 +11 64 +11 74 +11 60 +4 11 +11 89 +11 88 +11 36 +11 73 +11 27 +11 47 +40 92 +43 92 +30 92 +20 92 +32 92 +16 92 +18 92 +17 92 +65 92 +82 92 +22 92 +70 92 +50 92 +28 92 +52 92 +83 92 +8 92 +77 92 +81 92 +31 92 +64 92 +74 92 +89 92 +88 92 +27 92 +47 92 +63 66 +7 63 +2 63 +54 63 +63 76 +63 78 +1 63 +63 90 +20 63 +35 63 +16 63 +59 63 +17 63 +24 63 +63 82 +44 63 +63 70 +28 63 +63 83 +63 77 +63 75 +63 84 +63 69 +63 79 +60 63 +4 63 +36 63 +63 73 +6 66 +2 66 +40 66 +66 76 +43 66 +1 66 +30 66 +66 90 +20 66 +32 66 +35 66 +16 66 +18 66 +59 66 +17 66 +65 66 +24 66 +66 82 +22 66 +44 66 +66 70 +50 66 +28 66 +52 66 +66 83 +8 66 +66 77 +66 75 +66 84 +66 81 +31 66 +66 69 +66 79 +64 66 +66 74 +60 66 +4 66 +66 89 +66 88 +36 66 +66 73 +27 66 +47 66 +6 40 +6 43 +6 30 +6 20 +6 32 +6 16 +6 18 +6 17 +6 65 +6 82 +6 22 +6 70 +6 50 +6 28 +6 52 +6 83 +6 8 +6 77 +6 81 +6 31 +6 64 +6 74 +6 89 +6 88 +6 27 +6 47 +2 7 +7 78 +1 7 +7 90 +7 20 +7 35 +7 16 +7 59 +7 17 +7 24 +7 82 +7 44 +7 70 +7 28 +7 83 +7 77 +7 75 +7 84 +7 69 +7 79 +7 60 +4 7 +7 36 +7 73 +2 40 +2 78 +1 2 +2 30 +2 90 +2 20 +2 32 +2 35 +2 16 +2 18 +2 59 +2 17 +2 65 +2 24 +2 82 +2 22 +2 44 +2 70 +2 50 +2 28 +2 52 +2 83 +2 8 +2 77 +2 75 +2 84 +2 81 +2 31 +2 69 +2 79 +2 64 +2 74 +2 60 +2 4 +2 89 +2 88 +2 36 +2 73 +2 27 +2 47 +1 40 +30 40 +20 40 +32 40 +16 40 +18 40 +17 40 +40 65 +40 82 +22 40 +40 70 +40 50 +28 40 +40 52 +40 83 +8 40 +40 77 +40 81 +31 40 +40 64 
+40 74 +40 89 +40 88 +27 40 +40 47 +54 76 +54 78 +1 54 +54 90 +20 54 +35 54 +16 54 +54 59 +17 54 +24 54 +54 82 +44 54 +54 70 +28 54 +54 83 +54 77 +54 75 +54 84 +54 69 +54 79 +54 60 +4 54 +36 54 +54 73 +43 76 +76 78 +1 76 +30 76 +76 90 +20 76 +32 76 +35 76 +16 76 +18 76 +59 76 +17 76 +65 76 +24 76 +76 82 +22 76 +44 76 +70 76 +50 76 +28 76 +52 76 +76 83 +8 76 +76 77 +75 76 +76 84 +76 81 +31 76 +69 76 +76 79 +64 76 +74 76 +60 76 +4 76 +76 89 +76 88 +36 76 +73 76 +27 76 +47 76 +1 43 +30 43 +20 43 +32 43 +16 43 +18 43 +17 43 +43 65 +43 82 +22 43 +43 70 +43 50 +28 43 +43 52 +43 83 +8 43 +43 77 +43 81 +31 43 +43 64 +43 74 +43 89 +43 88 +27 43 +43 47 +1 78 +78 90 +20 78 +35 78 +16 78 +59 78 +17 78 +24 78 +78 82 +44 78 +70 78 +28 78 +78 83 +77 78 +75 78 +78 84 +69 78 +78 79 +60 78 +4 78 +36 78 +73 78 +1 30 +1 90 +1 20 +1 32 +1 35 +1 16 +1 18 +1 59 +1 17 +1 65 +1 24 +1 82 +1 22 +1 44 +1 70 +1 50 +1 28 +1 52 +1 83 +1 8 +1 77 +1 75 +1 84 +1 81 +1 31 +1 69 +1 79 +1 64 +1 74 +1 60 +1 4 +1 89 +1 88 +1 36 +1 73 +1 27 +1 47 +20 30 +30 32 +16 30 +18 30 +17 30 +30 65 +30 82 +22 30 +30 70 +30 50 +28 30 +30 52 +30 83 +8 30 +30 77 +30 81 +30 31 +30 64 +30 74 +30 89 +30 88 +27 30 +30 47 +20 90 +59 90 +17 90 +24 90 +82 90 +44 90 +70 90 +28 90 +83 90 +77 90 +75 90 +84 90 +69 90 +79 90 +60 90 +4 90 +36 90 +73 90 +20 32 +20 59 +17 20 +20 65 +20 24 +20 82 +20 22 +20 44 +20 70 +20 50 +20 28 +20 52 +20 83 +8 20 +20 77 +20 75 +20 84 +20 81 +20 31 +20 69 +20 79 +20 64 +20 74 +20 60 +4 20 +20 89 +20 88 +20 36 +20 73 +20 27 +20 47 +17 32 +32 65 +32 82 +22 32 +32 70 +32 50 +28 32 +32 52 +32 83 +8 32 +32 77 +32 81 +31 32 +32 64 +32 74 +32 89 +32 88 +27 32 +32 47 +16 35 +35 59 +17 35 +24 35 +35 82 +35 44 +35 70 +28 35 +35 83 +35 77 +35 75 +35 84 +35 69 +35 79 +35 60 +4 35 +35 36 +35 73 +16 18 +16 59 +16 17 +16 65 +16 24 +16 82 +16 22 +16 44 +16 70 +16 50 +16 28 +16 52 +16 83 +8 16 +16 77 +16 75 +16 84 +16 81 +16 31 +16 69 +16 79 +16 64 +16 74 +16 60 +4 16 +16 89 +16 88 +16 36 +16 73 +16 27 +16 47 +17 
18 +18 65 +18 82 +18 22 +18 70 +18 50 +18 28 +18 52 +18 83 +8 18 +18 77 +18 81 +18 31 +18 64 +18 74 +18 89 +18 88 +18 27 +18 47 +17 59 +24 59 +59 82 +44 59 +59 70 +28 59 +59 83 +59 77 +59 75 +59 84 +59 69 +59 79 +59 60 +4 59 +36 59 +59 73 +17 65 +17 24 +17 82 +17 22 +17 44 +17 70 +17 50 +17 28 +17 52 +17 83 +8 17 +17 77 +17 75 +17 84 +17 81 +17 31 +17 69 +17 79 +17 64 +17 74 +17 60 +4 17 +17 89 +17 88 +17 36 +17 73 +17 27 +17 47 +65 82 +22 65 +65 70 +50 65 +28 65 +52 65 +65 83 +8 65 +65 77 +65 81 +31 65 +64 65 +65 74 +65 89 +65 88 +27 65 +47 65 +24 82 +24 44 +24 70 +24 28 +24 83 +24 77 +24 75 +24 84 +24 69 +24 79 +24 60 +4 24 +24 36 +24 73 +22 82 +44 82 +70 82 +50 82 +28 82 +52 82 +82 83 +8 82 +77 82 +75 82 +82 84 +81 82 +31 82 +69 82 +79 82 +64 82 +74 82 +60 82 +4 82 +82 89 +82 88 +36 82 +73 82 +27 82 +47 82 +22 70 +22 50 +22 28 +22 52 +22 83 +8 22 +22 77 +22 81 +22 31 +22 64 +22 74 +22 89 +22 88 +22 27 +22 47 +44 45 +45 67 +45 70 +25 45 +3 45 +21 45 +45 80 +45 77 +44 70 +44 56 +28 44 +3 44 +44 83 +44 75 +44 84 +44 69 +44 79 +44 60 +4 44 +36 44 +44 73 +67 70 +50 67 +25 67 +21 67 +67 84 +67 79 +4 67 +67 73 +56 70 +50 70 +51 70 +25 70 +28 70 +52 70 +3 70 +21 70 +70 83 +8 70 +70 80 +70 77 +68 70 +70 75 +70 84 +70 81 +31 70 +69 70 +70 79 +64 70 +70 74 +60 70 +4 70 +70 89 +70 88 +36 70 +70 73 +27 70 +47 70 +51 56 +28 56 +56 83 +56 81 +56 64 +56 89 +27 56 +50 51 +25 50 +50 52 +21 50 +8 50 +50 81 +31 50 +50 64 +50 74 +50 89 +50 88 +27 50 +47 50 +28 51 +51 52 +51 83 +8 51 +51 77 +51 68 +25 28 +25 52 +25 80 +25 77 +25 84 +25 79 +4 25 +25 73 +28 52 +28 77 +28 68 +28 75 +28 84 +28 81 +28 31 +28 69 +28 79 +28 64 +28 74 +28 60 +4 28 +28 89 +28 88 +28 36 +28 73 +27 28 +28 47 +52 77 +52 68 +52 81 +31 52 +52 64 +52 74 +52 89 +52 88 +27 52 +47 52 +3 21 +3 83 +3 80 +3 77 +21 83 +8 21 +21 80 +21 77 +21 84 +21 79 +4 21 +21 73 +8 83 +77 83 +68 83 +75 83 +83 84 +81 83 +31 83 +69 83 +79 83 +64 83 +74 83 +60 83 +4 83 +83 89 +83 88 +36 83 +73 83 +27 83 +47 83 +8 77 +8 68 +8 81 +8 31 +8 64 
+8 74 +8 89 +8 88 +8 27 +8 47 +77 80 +68 77 +75 77 +77 84 +77 81 +31 77 +69 77 +77 79 +64 77 +74 77 +60 77 +4 77 +77 89 +77 88 +36 77 +73 77 +27 77 +47 77 +75 84 +75 81 +81 84 +31 84 +31 81 +69 79 +64 69 +64 79 +74 79 +64 74 +4 60 +60 89 +4 89 +4 88 +88 89 +36 73 +27 36 +27 73 +47 73 +27 47 diff --git a/solvers/TCS-Meiji/test_instance/ex005.gr b/solvers/TCS-Meiji/test_instance/ex005.gr new file mode 100644 index 0000000..091b064 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/ex005.gr @@ -0,0 +1,598 @@ +p tw 377 597 +258 295 +107 258 +258 299 +295 299 +4 359 +178 359 +4 89 +4 178 +126 267 +126 284 +126 261 +108 126 +136 267 +82 267 +108 267 +55 136 +136 164 +82 261 +55 82 +82 161 +188 350 +188 327 +8 236 +169 236 +31 236 +8 202 +8 169 +204 234 +22 234 +31 234 +309 336 +172 336 +204 336 +300 336 +78 309 +22 309 +140 172 +172 352 +12 140 +140 289 +117 140 +78 289 +78 292 +44 343 +44 289 +44 56 +44 292 +111 343 +111 232 +111 292 +111 353 +142 343 +343 369 +7 142 +7 254 +7 369 +73 341 +55 73 +73 161 +84 133 +84 333 +84 143 +14 84 +133 208 +61 133 +14 333 +331 333 +61 333 +281 372 +281 355 +222 281 +41 326 +36 326 +67 326 +244 372 +222 372 +41 244 +41 222 +101 244 +338 346 +63 338 +164 338 +118 338 +288 346 +118 346 +63 182 +21 63 +219 331 +36 219 +23 219 +160 341 +21 341 +288 370 +21 288 +160 194 +160 276 +19 296 +36 296 +23 296 +19 129 +19 138 +129 185 +76 129 +121 335 +76 121 +121 337 +76 99 +99 138 +99 156 +33 99 +40 370 +303 370 +335 368 +87 335 +42 130 +33 130 +87 130 +30 214 +30 42 +30 125 +30 87 +66 211 +66 109 +60 66 +66 228 +3 25 +25 123 +251 319 +125 319 +158 319 +94 251 +251 322 +112 179 +112 332 +112 322 +123 362 +201 362 +362 375 +141 321 +123 321 +200 321 +321 367 +216 246 +201 216 +210 216 +54 308 +54 377 +54 200 +308 377 +200 308 +246 367 +34 246 +277 347 +205 347 +131 347 +184 249 +93 184 +184 320 +50 249 +249 278 +50 93 +50 278 +95 290 +95 227 +95 270 +170 230 +230 277 +230 270 +60 273 +228 273 +215 273 +35 170 +170 371 +35 302 +35 100 +35 90 +79 
351 +79 154 +79 181 +81 189 +88 189 +153 371 +153 280 +153 154 +38 371 +38 280 +38 64 +58 88 +58 304 +58 265 +285 306 +179 285 +104 285 +2 306 +306 307 +186 315 +302 315 +264 315 +186 264 +186 233 +122 210 +83 122 +122 187 +17 313 +90 313 +250 313 +48 49 +49 323 +49 96 +163 243 +163 328 +163 317 +96 163 +196 243 +86 243 +9 243 +72 334 +279 334 +310 334 +220 325 +279 325 +271 325 +167 279 +167 271 +167 235 +191 269 +139 191 +191 271 +197 374 +45 197 +197 250 +51 275 +45 51 +51 250 +17 374 +257 374 +262 349 +253 349 +139 349 +193 349 +139 262 +166 262 +275 366 +275 376 +159 257 +159 376 +159 217 +105 328 +86 328 +317 328 +39 105 +105 212 +39 91 +39 83 +212 263 +86 212 +91 316 +91 239 +91 247 +92 363 +137 363 +34 363 +46 92 +92 165 +46 177 +46 137 +46 318 +165 177 +177 291 +65 177 +114 293 +165 293 +27 293 +152 316 +187 316 +239 263 +192 239 +1 114 +114 152 +263 358 +198 226 +75 226 +124 226 +9 198 +80 198 +69 198 +1 27 +1 237 +152 274 +110 340 +110 274 +110 209 +192 209 +192 247 +237 340 +116 340 +52 248 +248 297 +248 373 +103 366 +206 366 +6 97 +6 145 +6 149 +6 360 +26 206 +26 238 +26 233 +26 113 +97 147 +147 272 +147 206 +147 149 +97 344 +29 155 +29 148 +29 144 +260 272 +268 272 +272 344 +37 71 +71 342 +71 297 +11 71 +37 260 +11 37 +37 144 +52 342 +52 297 +128 260 +77 260 +128 268 +11 128 +141 348 +102 141 +174 339 +287 339 +329 339 +174 218 +174 311 +11 268 +245 365 +18 245 +18 115 +115 259 +13 115 +18 365 +68 365 +57 259 +57 68 +57 294 +135 150 +135 175 +135 252 +24 354 +324 354 +213 354 +70 354 +134 171 +134 225 +134 146 +171 225 +171 207 +119 255 +16 255 +207 255 +43 175 +43 221 +43 47 +231 241 +231 305 +47 241 +241 305 +98 150 +150 356 +150 252 +107 169 +31 107 +89 119 +119 178 +89 211 +284 361 +12 284 +300 361 +352 361 +12 261 +12 352 +117 261 +108 195 +327 350 +164 195 +118 195 +55 182 +202 204 +202 300 +22 232 +22 292 +143 289 +117 143 +117 161 +143 224 +56 142 +56 224 +180 224 +161 208 +168 223 +208 223 +61 223 +14 180 +164 182 +168 331 +62 331 +180 355 
+67 355 +67 222 +118 327 +109 211 +61 168 +168 276 +62 194 +62 185 +62 276 +190 194 +23 185 +185 337 +190 214 +190 337 +76 368 +101 138 +101 156 +33 368 +156 254 +40 303 +87 214 +2 42 +2 120 +3 123 +3 303 +94 125 +125 158 +94 120 +120 322 +179 332 +104 332 +104 210 +201 375 +367 375 +183 320 +183 270 +16 183 +59 93 +59 320 +59 278 +205 254 +254 302 +205 277 +227 290 +215 227 +60 109 +228 351 +215 351 +207 351 +302 307 +81 88 +181 371 +72 280 +64 72 +64 310 +307 323 +34 210 +264 323 +83 317 +48 233 +48 96 +28 48 +196 357 +96 196 +28 196 +100 220 +100 269 +220 269 +220 310 +265 279 +132 253 +253 271 +235 253 +74 304 +74 265 +74 256 +5 74 +256 304 +17 90 +45 257 +45 376 +166 282 +166 193 +103 376 +282 345 +193 282 +132 151 +132 193 +151 203 +203 265 +203 235 +151 286 +75 86 +32 357 +32 238 +28 32 +9 357 +85 357 +137 301 +283 301 +34 301 +283 364 +283 318 +266 364 +329 364 +65 318 +218 318 +165 187 +27 291 +20 291 +65 127 +65 229 +20 127 +127 229 +116 127 +75 358 +209 358 +80 124 +80 373 +15 69 +69 373 +124 209 +20 237 +247 274 +103 145 +103 149 +149 206 +145 217 +344 345 +10 345 +345 360 +53 238 +15 238 +10 286 +5 10 +53 155 +53 199 +113 155 +148 199 +148 342 +148 373 +15 199 +144 342 +102 348 +266 287 +218 266 +287 329 +218 229 +98 242 +98 324 +24 157 +24 252 +175 314 +106 259 +68 259 +13 106 +13 162 +106 312 +162 312 +173 312 +242 356 +176 356 +176 242 +240 242 +213 324 +240 324 +146 225 +15 85 +47 221 +157 298 +252 298 +70 298 +70 157 +300 352 +232 353 +353 369 +154 181 +229 311 +77 113 +77 144 +294 314 +131 330 +314 330 +162 173 diff --git a/solvers/TCS-Meiji/test_instance/ex007.gr b/solvers/TCS-Meiji/test_instance/ex007.gr new file mode 100644 index 0000000..21f4e0c --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/ex007.gr @@ -0,0 +1,452 @@ +p tw 137 451 +42 133 +63 133 +110 133 +89 133 +96 133 +38 133 +51 133 +46 80 +46 106 +46 76 +46 56 +6 46 +46 114 +46 111 +1 43 +1 135 +1 27 +1 112 +1 52 +1 114 +1 92 +1 81 +50 119 +50 93 +50 80 +43 50 +50 68 +50 116 +50 
130 +50 83 +50 123 +50 105 +50 111 +50 92 +31 50 +50 60 +26 50 +32 50 +37 67 +37 85 +37 135 +37 130 +37 115 +37 128 +37 107 +2 37 +37 75 +37 39 +37 81 +26 37 +18 37 +14 37 +37 121 +37 41 +64 126 +64 73 +64 83 +64 107 +64 78 +13 64 +64 100 +64 86 +19 64 +64 132 +32 64 +64 121 +64 101 +20 64 +64 125 +64 74 +2 66 +66 100 +48 66 +15 66 +41 66 +66 125 +66 131 +47 66 +66 98 +44 86 +8 44 +44 103 +44 74 +44 47 +44 84 +44 57 +93 119 +80 119 +43 119 +68 119 +119 130 +83 119 +67 85 +67 135 +67 130 +67 115 +67 107 +2 67 +73 126 +83 126 +107 126 +78 126 +100 126 +86 126 +80 93 +43 93 +68 93 +93 130 +83 93 +85 135 +85 130 +85 115 +85 107 +2 85 +73 83 +73 107 +73 78 +73 100 +73 86 +43 80 +80 130 +80 83 +43 116 +43 130 +43 83 +130 135 +107 135 +2 135 +68 116 +128 130 +107 130 +2 130 +83 107 +83 100 +83 86 +115 128 +13 107 +100 107 +86 107 +2 100 +13 78 +82 91 +91 123 +87 91 +4 91 +3 21 +3 75 +3 29 +3 9 +16 25 +19 25 +7 25 +25 97 +82 123 +82 94 +82 87 +82 124 +4 82 +82 134 +21 75 +21 95 +21 29 +21 45 +9 21 +21 90 +16 19 +16 59 +7 16 +16 102 +16 97 +16 34 +76 106 +56 106 +6 106 +106 114 +106 111 +27 72 +27 112 +27 52 +27 114 +27 92 +27 81 +94 123 +87 123 +5 123 +72 123 +123 124 +123 127 +120 123 +4 123 +123 137 +123 134 +105 123 +111 123 +92 123 +31 123 +26 123 +32 123 +75 95 +29 75 +75 109 +75 120 +45 75 +75 117 +36 75 +9 75 +75 118 +75 90 +39 75 +75 81 +26 75 +18 75 +75 121 +41 75 +19 59 +7 19 +19 88 +19 36 +19 102 +19 53 +19 79 +19 97 +19 77 +19 34 +19 132 +19 32 +19 121 +19 101 +19 125 +19 74 +48 79 +15 48 +41 48 +48 125 +47 48 +48 98 +8 103 +8 74 +8 47 +8 84 +8 57 +5 72 +5 127 +5 120 +109 120 +109 117 +36 109 +36 88 +53 88 +79 88 +72 127 +72 120 +124 127 +4 124 +124 137 +124 134 +30 124 +105 124 +117 120 +36 120 +45 117 +9 45 +45 118 +45 90 +45 65 +39 45 +36 53 +36 79 +53 102 +97 102 +77 102 +34 102 +12 102 +102 132 +11 71 +11 42 +11 129 +11 69 +134 137 +30 137 +105 137 +90 118 +65 118 +39 118 +34 77 +12 77 +77 132 +42 71 +28 71 +71 129 +40 71 +69 71 +62 71 +30 105 +39 65 +12 
132 +49 70 +49 108 +23 49 +49 89 +28 42 +42 129 +42 58 +23 42 +40 42 +35 42 +17 42 +42 69 +10 42 +42 62 +42 63 +42 110 +42 89 +42 96 +38 42 +42 51 +24 55 +17 24 +24 33 +24 136 +24 38 +56 76 +6 76 +76 114 +76 111 +52 112 +112 114 +92 112 +81 112 +105 111 +92 105 +31 105 +26 105 +32 105 +39 81 +26 39 +18 39 +39 121 +39 41 +32 132 +121 132 +101 132 +125 132 +74 132 +15 41 +15 125 +15 47 +15 98 +74 103 +47 103 +84 103 +57 103 +6 56 +56 114 +56 111 +6 114 +6 111 +52 114 +52 92 +52 81 +92 114 +81 114 +92 111 +111 122 +99 111 +26 111 +32 111 +60 92 +92 122 +26 92 +32 92 +26 81 +81 113 +70 81 +81 121 +41 81 +31 60 +60 122 +99 122 +99 108 +14 26 +26 113 +26 121 +26 41 +32 121 +32 104 +32 58 +32 125 +32 74 +14 18 +14 113 +70 113 +70 108 +23 70 +20 121 +104 121 +121 125 +74 121 +41 125 +41 54 +41 55 +41 47 +41 98 +20 101 +20 104 +58 104 +23 58 +35 58 +17 58 +125 131 +54 125 +47 125 +98 125 +47 74 +22 74 +61 74 +74 84 +57 74 +54 131 +54 55 +17 55 +33 55 +55 136 +22 47 +47 84 +47 57 +84 98 +22 61 +61 136 +23 108 +23 35 +17 23 +35 40 +40 69 +10 40 +40 62 +40 63 +17 33 +17 136 +10 62 +10 63 +63 110 +63 89 +63 96 +38 63 +51 63 +89 110 +38 110 +51 110 +38 89 +51 89 diff --git a/solvers/TCS-Meiji/test_instance/ex009.gr b/solvers/TCS-Meiji/test_instance/ex009.gr new file mode 100644 index 0000000..6659361 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/ex009.gr @@ -0,0 +1,663 @@ +p tw 466 662 +423 457 +267 457 +432 457 +88 458 +88 432 +88 96 +128 163 +128 254 +128 227 +163 254 +369 423 +291 423 +130 175 +109 130 +130 349 +130 358 +369 436 +173 369 +109 175 +175 316 +109 333 +316 333 +333 335 +167 300 +167 459 +167 291 +44 222 +44 261 +44 326 +253 458 +384 458 +96 458 +253 384 +253 302 +384 441 +426 441 +39 426 +7 426 +39 441 +97 441 +39 272 +68 100 +68 281 +100 281 +58 100 +13 100 +140 455 +272 455 +97 455 +58 455 +140 281 +13 140 +131 272 +165 306 +131 165 +165 383 +306 385 +306 425 +385 425 +322 450 +310 450 +210 450 +310 322 +101 322 +67 310 +101 239 +93 239 +131 383 +50 383 
+291 395 +336 395 +334 395 +433 443 +326 443 +209 443 +357 433 +334 433 +159 357 +357 418 +37 183 +37 178 +37 157 +37 69 +111 114 +114 193 +114 418 +111 184 +6 111 +183 442 +157 183 +222 400 +3 222 +134 217 +193 217 +217 453 +14 134 +14 302 +14 453 +178 325 +178 235 +154 325 +3 325 +184 442 +6 442 +235 442 +184 408 +134 244 +7 134 +154 389 +3 389 +332 389 +42 408 +235 408 +154 273 +42 64 +42 290 +42 244 +64 290 +64 126 +290 387 +244 424 +273 440 +273 372 +273 319 +61 440 +61 387 +40 61 +24 126 +24 263 +24 366 +352 440 +422 424 +263 422 +421 422 +286 338 +286 421 +286 378 +245 338 +245 285 +196 245 +338 378 +22 177 +22 146 +22 268 +177 340 +177 268 +50 331 +309 331 +331 370 +80 411 +80 345 +80 340 +80 309 +345 419 +202 419 +146 419 +210 411 +362 411 +169 412 +67 412 +362 412 +159 418 +213 311 +293 311 +213 293 +340 345 +180 358 +180 227 +229 301 +125 229 +229 446 +301 346 +346 446 +258 346 +141 271 +271 446 +271 339 +141 301 +141 317 +171 349 +349 381 +329 380 +135 329 +329 437 +380 435 +380 456 +112 400 +112 381 +112 277 +83 400 +83 332 +83 223 +76 148 +56 148 +148 319 +76 106 +54 76 +106 307 +54 106 +86 240 +20 86 +86 352 +240 352 +20 307 +221 307 +307 319 +20 221 +221 247 +75 221 +33 247 +12 247 +278 314 +314 391 +75 314 +278 391 +19 278 +19 202 +202 354 +19 361 +19 354 +135 437 +135 435 +136 139 +77 136 +70 136 +79 361 +75 79 +79 139 +344 361 +296 361 +283 354 +107 283 +283 292 +162 382 +342 382 +147 382 +121 435 +90 282 +90 170 +170 282 +170 287 +218 324 +324 372 +223 324 +218 323 +218 407 +25 320 +151 320 +204 320 +119 250 +119 407 +119 279 +25 35 +25 355 +35 250 +35 398 +1 47 +1 168 +1 232 +47 360 +47 194 +185 360 +360 394 +168 376 +164 168 +168 232 +351 376 +251 376 +185 420 +185 350 +194 394 +11 394 +152 194 +350 420 +103 420 +11 308 +11 31 +152 308 +233 308 +164 447 +60 164 +164 208 +144 447 +152 447 +60 351 +60 81 +284 350 +103 284 +192 284 +143 351 +351 359 +158 416 +365 416 +399 416 +233 416 +52 158 +31 158 +143 145 +143 330 +144 256 +144 197 +144 208 
+81 359 +81 208 +81 117 +197 365 +201 365 +341 359 +321 399 +156 399 +242 294 +145 294 +262 294 +259 321 +201 321 +52 156 +156 401 +32 52 +124 256 +201 256 +256 461 +251 262 +251 466 +248 327 +145 248 +211 248 +94 198 +94 367 +94 249 +198 341 +186 198 +132 198 +161 341 +341 368 +405 448 +124 448 +289 448 +201 448 +259 405 +405 413 +121 456 +121 243 +43 327 +266 327 +327 368 +66 124 +66 461 +66 99 +85 124 +43 252 +43 241 +55 57 +41 55 +55 216 +57 449 +57 371 +266 367 +367 368 +252 266 +255 266 +289 413 +187 289 +85 289 +214 252 +252 463 +186 427 +186 249 +328 388 +328 353 +216 328 +275 427 +427 462 +18 415 +343 415 +214 415 +51 270 +226 270 +187 270 +270 454 +51 462 +51 454 +249 410 +255 410 +74 410 +18 241 +18 303 +241 466 +224 343 +214 224 +27 224 +63 303 +105 303 +174 343 +28 226 +226 347 +205 225 +205 255 +74 205 +174 212 +174 377 +63 464 +34 464 +53 464 +63 388 +225 439 +225 463 +195 225 +34 105 +34 46 +9 439 +219 439 +118 439 +17 108 +17 219 +17 195 +9 191 +9 275 +9 118 +46 53 +53 228 +191 347 +191 397 +108 460 +108 212 +104 219 +104 118 +104 160 +356 460 +228 460 +49 451 +2 49 +353 388 +138 344 +344 438 +72 127 +127 139 +127 296 +127 305 +4 265 +98 265 +4 98 +4 465 +92 98 +48 172 +48 200 +48 122 +72 452 +72 206 +260 304 +138 304 +215 304 +379 396 +203 396 +246 396 +5 234 +5 374 +5 386 +246 404 +110 404 +375 404 +95 406 +26 95 +95 318 +393 406 +26 406 +8 318 +8 220 +8 91 +393 414 +166 393 +45 414 +276 414 +288 299 +236 288 +288 431 +45 276 +402 430 +82 402 +188 402 +82 430 +137 295 +274 295 +113 295 +59 65 +59 274 +59 115 +65 115 +149 182 +87 182 +259 401 +267 432 +96 336 +227 358 +316 373 +373 459 +261 373 +316 337 +300 436 +300 459 +173 436 +337 392 +335 392 +261 392 +335 337 +7 302 +97 281 +13 58 +424 425 +50 101 +210 370 +93 101 +93 169 +153 169 +334 336 +293 326 +6 157 +193 453 +332 372 +126 387 +366 424 +263 366 +40 263 +40 421 +40 352 +196 285 +146 285 +309 370 +67 362 +69 209 +69 429 +209 429 +125 301 +258 446 +258 297 +297 339 +339 390 +297 390 +171 
381 +171 317 +162 342 +56 162 +56 428 +54 428 +33 428 +139 312 +317 323 +456 466 +269 323 +179 223 +151 355 +250 355 +151 287 +123 250 +103 350 +152 197 +197 233 +31 32 +32 192 +145 242 +208 461 +330 368 +211 262 +117 461 +117 161 +99 117 +99 132 +243 298 +298 353 +41 298 +33 280 +280 371 +249 275 +187 189 +28 189 +255 463 +27 463 +74 275 +27 195 +28 155 +89 212 +89 377 +347 348 +155 348 +199 348 +160 356 +199 397 +29 228 +29 207 +78 207 +77 181 +2 451 +21 451 +21 82 +179 277 +216 449 +78 181 +147 279 +85 454 +269 409 +190 409 +190 204 +12 70 +232 237 +123 237 +237 398 +30 142 +142 264 +30 110 +176 305 +107 438 +38 465 +172 417 +10 172 +153 417 +16 452 +36 364 +36 92 +116 364 +116 445 +16 260 +200 234 +215 315 +315 386 +206 379 +203 379 +10 73 +73 150 +26 374 +264 444 +166 318 +166 220 +91 133 +133 230 +15 299 +238 299 +15 375 +23 238 +23 236 +236 431 +38 312 +176 274 +84 444 +71 84 +150 230 +122 292 +129 313 +231 313 +313 403 +129 137 +113 445 +102 231 +102 257 +231 257 +115 363 +71 434 +71 363 +120 149 +120 257 +62 434 +62 149 +87 149 +188 403 diff --git a/solvers/TCS-Meiji/test_instance/gr-only.gr b/solvers/TCS-Meiji/test_instance/gr-only.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/gr-only.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/test_instance/p-num-vertices-larger.gr b/solvers/TCS-Meiji/test_instance/p-num-vertices-larger.gr new file mode 100644 index 0000000..13b152b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/p-num-vertices-larger.gr @@ -0,0 +1,5 @@ +p tw 6 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/test_instance/single-edge.gr b/solvers/TCS-Meiji/test_instance/single-edge.gr new file mode 100644 index 0000000..1870d43 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/single-edge.gr @@ -0,0 +1,2 @@ +p tw 2 1 +1 2 diff --git a/solvers/TCS-Meiji/test_instance/single-vertex.gr b/solvers/TCS-Meiji/test_instance/single-vertex.gr new file mode 100644 
index 0000000..f53426c --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/single-vertex.gr @@ -0,0 +1 @@ +p tw 1 0 diff --git a/solvers/TCS-Meiji/test_instance/two-vertices-2.gr b/solvers/TCS-Meiji/test_instance/two-vertices-2.gr new file mode 100644 index 0000000..114d4a9 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/two-vertices-2.gr @@ -0,0 +1 @@ +p tw 2 0 diff --git a/solvers/TCS-Meiji/test_instance/two-vertices.gr b/solvers/TCS-Meiji/test_instance/two-vertices.gr new file mode 100644 index 0000000..114d4a9 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/two-vertices.gr @@ -0,0 +1 @@ +p tw 2 0 diff --git a/solvers/TCS-Meiji/test_instance/web1.gr b/solvers/TCS-Meiji/test_instance/web1.gr new file mode 100644 index 0000000..9d46124 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/web1.gr @@ -0,0 +1,7 @@ +c This file describes a path with five vertices and four edges. +p tw 5 4 +1 2 +2 3 +c we are half-way done with the instance definition. +3 4 +4 5 diff --git a/solvers/TCS-Meiji/test_instance/web2.gr b/solvers/TCS-Meiji/test_instance/web2.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/web2.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/test_instance/web3.gr b/solvers/TCS-Meiji/test_instance/web3.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/web3.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/test_instance/web4.gr b/solvers/TCS-Meiji/test_instance/web4.gr new file mode 100644 index 0000000..0b1381b --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/web4.gr @@ -0,0 +1,5 @@ +p tw 5 4 +1 2 +2 3 +3 4 +4 5 diff --git a/solvers/TCS-Meiji/test_instance/wedge.gr b/solvers/TCS-Meiji/test_instance/wedge.gr new file mode 100644 index 0000000..c28f165 --- /dev/null +++ b/solvers/TCS-Meiji/test_instance/wedge.gr @@ -0,0 +1,3 @@ +p tw 3 2 +1 2 +2 3 diff --git a/solvers/TCS-Meiji/tw-exact 
b/solvers/TCS-Meiji/tw-exact new file mode 100644 index 0000000..471ab91 --- /dev/null +++ b/solvers/TCS-Meiji/tw-exact @@ -0,0 +1,3 @@ +#!/bin/sh +# +java -Xmx30g -Xms30g -Xss10m tw.exact.MainDecomposer diff --git a/solvers/TCS-Meiji/tw-heuristic b/solvers/TCS-Meiji/tw-heuristic new file mode 100644 index 0000000..b29d009 --- /dev/null +++ b/solvers/TCS-Meiji/tw-heuristic @@ -0,0 +1,29 @@ +#!/bin/bash + +JFLAGS="-Xmx30g -Xms30g -Xss1g" + +tmp="/tmp/tmp_input"_"$$" +trap 'rm -f $tmp' EXIT +cat > $tmp + +seed=42 +while getopts s: OPT +do + case $OPT in + s) + seed=$OPTARG + ;; + esac +done + +java $JFLAGS tw.heuristic.MainDecomposer -s $seed < $tmp & + +PID=$! +trap 'kill -SIGTERM $PID' SIGTERM +trap 'kill -SIGKILL -- -$PID &> /dev/null; rm -f $tmp' EXIT + +while : +do + wait $PID + kill -0 $PID 2> /dev/null || break +done diff --git a/solvers/TCS-Meiji/tw/exact/Bag.java b/solvers/TCS-Meiji/tw/exact/Bag.java new file mode 100644 index 0000000..2a2cfcd --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/Bag.java @@ -0,0 +1,532 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.util.ArrayList; + +import java.util.Arrays; + +public class Bag implements Comparable{ + Bag parent; + XBitSet vertexSet; + int size; + Graph graph; + int conv[]; + int inv[]; + ArrayList nestedBags; + ArrayList separators; + ArrayList incidentSeparators; + int width; + int separatorWidth; + int lowerBound; + int inheritedLowerBound; + boolean optimal; + + SafeSeparator ss; + + public Bag(Graph graph) { + this(null, graph.all); + this.graph = graph; + } + + public Bag(Bag parent, XBitSet vertexSet) { + this.parent = parent; + this.vertexSet = vertexSet; + size = vertexSet.cardinality(); + incidentSeparators = new ArrayList<>(); + } + + public void initializeForDecomposition() { + if (graph == null) { + if (parent == null) { + throw new RuntimeException("graph not available for decomposition"); + } + else { + makeLocalGraph(); + } + } + nestedBags = new 
ArrayList<>(); + separators = new ArrayList<>(); + width = 0; + separatorWidth = 0; + } + + public void attachSeparator(Separator separator) { + incidentSeparators.add(separator); + } + + public void makeRefinable() { + makeLocalGraph(); + nestedBags = new ArrayList<>(); + separators = new ArrayList<>(); + } + + public int maxNestedBagSize() { + if (nestedBags != null) { + int max = 0; + for (Bag bag:nestedBags) { + if (bag.size > max) { + max = bag.size; + } + } + return max; + } + return -1; + } + + public Bag addNestedBag(XBitSet vertexSet) { + Bag bag = new Bag(this, vertexSet); + nestedBags.add(bag); + return bag; + } + + public Separator addSeparator(XBitSet vertexSet) { + Separator separator = new Separator(this, vertexSet); + separators.add(separator); + return separator; + } + + public void addIncidentSeparator(Separator separator) { + incidentSeparators.add(separator); + } + + private void makeLocalGraph() { + graph = new Graph(size); + conv = new int[parent.size]; + inv = new int[size]; + + XBitSet vertexSet = this.vertexSet; + + int k = 0; + for (int v = 0; v < parent.size; v++) { + if (vertexSet.get(v)) { + conv[v] = k; + inv[k++] = v; + } + else { + conv[v] = -1; + } + } + + graph.inheritEdges(parent.graph, conv, inv); + +// System.out.println("filling all, " + incidentSeparators.size() + " incident separators"); + for (Separator separator: incidentSeparators) { +// System.out.println("filling " + separator); + graph.fill(convert(separator.vertexSet, conv)); + } + } + + public int getWidth() { + if (nestedBags == null) { + return size - 1; + } + int max = 0; + for (Bag bag: nestedBags) { + int w = bag.getWidth(); + if (w > max) { + max = w; + } + } + for (Separator separator: separators) { + int w = separator.vertexSet.cardinality(); + if (w > max) { + max = w; + } + } + return max; + + } + + public void setWidth() { + // assumes that the bag is flat + +// System.out.println("setWidth for " + this.vertexSet); +// System.out.println("nestedBags = " + 
nestedBags); + + if (nestedBags == null) { + width = size - 1; + separatorWidth = 0; + return; + } + + width = 0; + separatorWidth = 0; + + for (Bag bag: nestedBags) { + if (bag.size - 1 > width) { + width = bag.size - 1; + } + } + + for (Separator separator: separators) { + if (separator.size > separatorWidth) { + separatorWidth = separator.size; + } + } + + if (separatorWidth > width) { + width = separatorWidth; + } + } + + public void flatten() { + if (nestedBags == null) { + return; + } + + validate(); + for (Bag bag: nestedBags) { + if (bag.nestedBags != null) { + bag.flatten(); + } + } + validate(); + ArrayList newSeparatorList = new ArrayList<>(); + for (Separator separator: separators) { +// System.out.println(separator.incidentBags.size() + " incident bags of " + +// separator); + ArrayList newIncidentBags = new ArrayList<>(); + for (Bag bag: separator.incidentBags) { + if (bag.parent == this && bag.nestedBags != null && + !bag.nestedBags.isEmpty()) { + Bag nested = bag.findNestedBagContaining( + convert(separator.vertexSet, bag.conv)); + if (nested == null) { + bag.dump(); + System.out.println(" does not have a bag containing " + + convert(separator.vertexSet, bag.conv) + " which is originally " + + separator.vertexSet); + this.dump(); + } + + newIncidentBags.add(nested); + nested.addIncidentSeparator(separator); + } + else { + newIncidentBags.add(bag); + } + } + if (!newIncidentBags.isEmpty()) { + separator.incidentBags = newIncidentBags; + newSeparatorList.add(separator); + } +// System.out.println("processed separator :" + separator); + } + separators = newSeparatorList; + + ArrayList temp = nestedBags; + nestedBags = new ArrayList<>(); + for (Bag bag: temp) { + if (bag.nestedBags != null && !bag.nestedBags.isEmpty()) { + for (Bag nested: bag.nestedBags) { +// System.out.println("inverting " + nested); + nested.invert(); + nestedBags.add(nested); +// System.out.println("inverted " + nested); + } + for (Separator separator: bag.separators) { +// 
System.out.println("inverting sep " + separator); + separator.invert(); + this.separators.add(separator); +// System.out.println("inverted sep " + separator); + } + } + else { +// System.out.println("adding original bag " + bag.vertexSet); + nestedBags.add(bag); + } + } + setWidth(); +// System.out.println("bag of size " + size + " flattened into " + nestedBags.size() + " bags and width " + +// width); +// for (Bag bag: nestedBags) { +// System.out.println("incident separators of " + bag.vertexSet); +// for (Separator s: bag.incidentSeparators) { +// System.out.println(" " + s.vertexSet); +// for (Bag b: s.incidentBags) { +// System.out.println(" " + b.vertexSet); +// } +// } +// } + } + + public Bag findNestedBagContaining(XBitSet vertexSet) { + for (Bag bag: nestedBags) { + if (vertexSet.isSubset(bag.vertexSet)) { + return bag; + } + } + return null; + } + + public void invert() { + vertexSet = convert(vertexSet, parent.inv); + parent = parent.parent; + } + + public void convert() { + vertexSet = convert(vertexSet, parent.conv); + } + + public XBitSet convert(XBitSet s) { + return convert(s, conv); + } + + private XBitSet convert(XBitSet s, int[] conv) { + if (conv.length < s.length()) { + return null; + } + XBitSet result = new XBitSet(); + for (int v = s.nextSetBit(0); v >= 0; + v = s.nextSetBit(v + 1)) { + result.set(conv[v]); + } + return result; + } + + public TreeDecomposition toTreeDecomposition() { + setWidth(); + TreeDecomposition td = new TreeDecomposition(0, width, graph); + for (Bag bag: nestedBags) { + td.addBag(bag.vertexSet.toArray()); + } + + for (Separator separator: separators) { + XBitSet vs = separator.vertexSet; + Bag full = null; + for (Bag bag: separator.incidentBags) { + if (vs.isSubset(bag.vertexSet)) { + full = bag; + break; + } + } + + if (full != null) { + int j = nestedBags.indexOf(full) + 1; + for (Bag bag: separator.incidentBags) { + + if (bag != full) { + td.addEdge(j, nestedBags.indexOf(bag) + 1); + } + } + } + else { + int j = 
td.addBag(separator.vertexSet.toArray());
+        for (Bag bag: separator.incidentBags) {
+          td.addEdge(j, nestedBags.indexOf(bag) + 1);
+        }
+      }
+    }
+
+    return td;
+  }
+
+  public void detectSafeSeparators() {
+    ss = new SafeSeparator(graph);
+    for (Separator separator: separators) {
+//      separator.figureOutSafetyBySPT();
+      separator.figureOutSafety(ss);
+    }
+  }
+
+  public void pack() {
+    ArrayList<Bag> newBagList = new ArrayList<>();
+    for (Bag bag: nestedBags) {
+      if (bag.parent == this) {
+        ArrayList<Bag> bagsToPack = new ArrayList<>();
+        bag.collectBagsToPack(bagsToPack, null);
+//        System.out.println("bags to pack: " + bagsToPack);
+        if (bagsToPack.size() >= 2) {
+          XBitSet vertexSet = new XBitSet(graph.n);
+          for (Bag toPack: bagsToPack) {
+            vertexSet.or(toPack.vertexSet);
+          }
+          Bag packed = new Bag(this, vertexSet);
+          packed.initializeForDecomposition();
+          packed.nestedBags = bagsToPack;
+          for (Bag toPack: bagsToPack) {
+            toPack.parent = packed;
+            toPack.convert();
+          }
+          newBagList.add(packed);
+        }
+        else {
+          newBagList.add(bag);
+        }
+      }
+    }
+    nestedBags = newBagList;
+
+    ArrayList<Separator> newSeparatorList = new ArrayList<>();
+
+    for (Separator separator: separators) {
+      boolean internal = true;
+      Bag parent = null;
+      for (Bag b: separator.incidentBags) {
+        if (b.parent == this) {
+          internal = false;
+          break;
+        }
+        else if (parent == null) {
+          parent = b.parent;
+        }
+        else if (b.parent != parent) {
+          internal = false;
+          break;
+        }
+      }
+      if (internal) {
+        separator.parent = parent;
+        separator.convert();
+        parent.separators.add(separator);
+      }
+      else {
+        ArrayList<Bag> newIncidentBags = new ArrayList<>();
+        for (Bag b: separator.incidentBags) {
+          if (b.parent == this) {
+            newIncidentBags.add(b);
+          }
+          else {
+            newIncidentBags.add(b.parent);
+            b.parent.incidentSeparators.add(separator);
+            b.incidentSeparators.remove(separator);
+          }
+        }
+        separator.incidentBags = newIncidentBags;
+        newSeparatorList.add(separator);
+      }
+    }
+
+    separators = newSeparatorList;
+
+    for (Bag bag: nestedBags) {
+      bag.setWidth();
+    }
+    setWidth();
+  }
+
+  public void collectBagsToPack(ArrayList<Bag> list, Separator from) {
+    list.add(this);
+    for (Separator separator: incidentSeparators) {
+//      System.out.println("  safe = " + separator.safe);
+      if (separator == from || separator.safe || separator.wall) {
+        continue;
+      }
+      separator.collectBagsToPack(list, this);
+    }
+  }
+
+  public int countSafeSeparators() {
+    int count = 0;
+    for (Separator separator: separators) {
+      if (separator.safe) {
+        count++;
+      }
+    }
+    return count;
+  }
+
+  public void dump() {
+    dump("");
+  }
+
+  public void validate() {
+    if (nestedBags != null) {
+//      assert !nestedBags.isEmpty() : "no nested bags " + this;
+      for (Bag b: nestedBags) {
+        b.validate();
+        assert !b.vertexSet.isEmpty(): "empty bag " + b;
+        assert b.parent == this: "parent of " + b +
+          "\n which is " + b.parent +
+          "\n is supposed to be " + this;
+      }
+      for (Separator s: separators) {
+        assert !s.vertexSet.isEmpty(): "empty separator " + s;
+        assert s.parent == this: "parent of " + s +
+          "\n which is " + s.parent +
+          "\n is supposed to be " + this;
+      }
+      for (Bag b: nestedBags) {
+        for (Separator s: b.incidentSeparators) {
+          assert !s.vertexSet.isEmpty(): "empty separator " + s +
+            "\n incident to " + b;
+          assert s.parent == this: "parent of " + s +
+            "\n which is " + s.parent +
+            "\n is supposed to be " + this +
+            "\n where the separator is incident to bag " + b;
+          assert s.vertexSet.isSubset(b.vertexSet): "separator vertex set " + s.vertexSet +
+            "\n is not a subset of the bag vertex set " + b.vertexSet;
+        }
+      }
+      for (Separator separator: separators) {
+        for (Bag b: separator.incidentBags) {
+          assert b != null;
+          assert b.parent == this: "parent of " + b +
+            "\n which is " + b.parent +
+            "\n is supposed to be " + this +
+            "\n where the bag is incident to separator " + separator;
+          assert separator.vertexSet.isSubset(b.vertexSet): "separator vertex set " +
+            separator.vertexSet +
+            "\n is not a subset of the bag vertex set " + b.vertexSet;
+        }
+      }
+    }
+  }
+
+  private void dump(String indent) {
+    System.out.println(indent + "bag:" + vertexSet);
+    System.out.print(indent + "width = " + width + ", conv = ");
+    System.out.println(Arrays.toString(conv));
+    if (nestedBags != null) {
+      System.out.println(indent + nestedBags.size() + " subbags:");
+      for (Bag bag: nestedBags) {
+        bag.dump(indent + "  ");
+      }
+      for (Separator separator: separators) {
+        separator.dump(indent + "  ");
+      }
+    }
+  }
+
+  public void canonicalize() {
+    boolean moving = true;
+    // fixed: the original condition was "while (moving = true)", an
+    // assignment that made the loop never terminate once entered
+    while (moving) {
+      moving = false;
+      for (Bag bag: nestedBags) {
+        if (bag.trySplit()) {
+          moving = true;
+        }
+      }
+      if (moving) {
+        flatten();
+      }
+    }
+  }
+
+  private boolean trySplit() {
+    return false;
+  }
+
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    if (parent != null) {
+      sb.append("bag" + parent.nestedBags.indexOf(this) + ":");
+    }
+    else {
+      sb.append("root bag :");
+    }
+    sb.append(vertexSet);
+    return sb.toString();
+  }
+
+  @Override
+  public int compareTo(Bag b) {
+    if (size != b.size) {
+      return b.size - size;
+    }
+    return XBitSet.ascendingComparator.compare(b.vertexSet, vertexSet);
+  }
+}
diff --git a/solvers/TCS-Meiji/tw/exact/BlockSieve.java b/solvers/TCS-Meiji/tw/exact/BlockSieve.java
new file mode 100644
index 0000000..83b0207
--- /dev/null
+++ b/solvers/TCS-Meiji/tw/exact/BlockSieve.java
@@ -0,0 +1,866 @@
+/*
+ * Copyright (c) 2017, Hisao Tamaki and Hiromu Otsuka
+ */
+
+package tw.exact;
+
+import java.io.PrintStream;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+
+public class BlockSieve{
+  private static final String spaces64 =
+    "                                                                ";
+  public static final int MAX_CHILDREN_SIZE = 512;
+  private Node root;
+  private int n;
+  private int last;
+  private int targetWidth;
+  private int margin;
+  private int size;
+
+  private abstract class Node{
+    protected int index;
+    protected int width;
+    protected int ntz;
+    protected Node[] children;
+    protected XBitSet[] values;
+    protected int[] cardinalities;
+
+
protected Node(int index, int width, int ntz){ + this.index = index; + this.width = width; + this.ntz = ntz; + } + + public abstract int indexOf(long label); + public abstract int add(long label); + public abstract int size(); + public abstract long getLabelAt(int i); + + public long getMask(){ + return Unsigned.consecutiveOneBit(ntz, ntz + width); + } + + public int add(long label, Node child){ + int i = add(label); + if(children != null){ + children = Arrays.copyOf(children, children.length + 1); + for(int j = children.length - 1; j - 1 >= i; j--){ + children[j] = children[j - 1]; + } + children[i] = child; + return i; + } + else{ + children = new Node[1]; + children[0] = child; + return 0; + } + } + + public int add(long label, XBitSet value){ + return add(label, value, value.cardinality()); + } + + public int add(long label, XBitSet value, int cardinality){ + int i = add(label); + if(values != null && cardinalities != null){ + values = Arrays.copyOf(values, values.length + 1); + for(int j = values.length - 1; j - 1 >= i; j--){ + values[j] = values[j - 1]; + } + values[i] = value; + cardinalities = + Arrays.copyOf(cardinalities, cardinalities.length + 1); + for(int j = cardinalities.length - 1; j - 1 >= i; j--){ + cardinalities[j] = cardinalities[j - 1]; + } + cardinalities[i] = cardinality; + return i; + } + else{ + values = new XBitSet[1]; + values[0] = value; + cardinalities = new int[1]; + cardinalities[0] = cardinality; + return 0; + } + } + + public boolean isLeaf(){ + return index == last && isLastInInterval(); + } + + public boolean isLastInInterval(){ + return ntz + width == 64; + } + + public abstract void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list); + + public abstract void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list); + + public abstract void dump(PrintStream ps, String indent); + } + + private class ByteNode extends Node{ + private byte[] labels; + + 
public ByteNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 8); + labels = new byte[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + public long getLabelAt(int i){ + return Unsigned.toUnsignedLong(labels[i]) << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, + Unsigned.byteValue((label & getMask()) >>> ntz)); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = Unsigned.byteValue((label & getMask()) >>> ntz); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for(int i = 0; i < labels.length; i++){ + ps.println(indent + Long.toBinaryString(labels[i])); + if(!isLeaf()){ + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list){ + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue(((longs[index] & mask) >>> ntz)); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + 
children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list) { + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + private class ShortNode extends Node{ + private short[] labels; + + public ShortNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 16); + labels = new short[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + public long getLabelAt(int i){ + return Unsigned.toUnsignedLong(labels[i]) << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, + Unsigned.shortValue((label & getMask()) >>> ntz)); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = Unsigned.shortValue(((label & getMask()) >>> ntz)); + return 
i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for (int i = 0; i < labels.length; i++) { + ps.println(indent + Long.toBinaryString(labels[i])); + if (!isLeaf()) { + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list){ + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list) { + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + 
if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + private class IntegerNode extends Node{ + private int[] labels; + + public IntegerNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 32); + labels = new int[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + public long getLabelAt(int i){ + return Integer.toUnsignedLong(labels[i]) << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, + Unsigned.intValue((label & getMask()) >>> ntz)); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = Unsigned.intValue((label & getMask()) >>> ntz); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for (int i = 0; i < labels.length; i++) { + ps.println(indent + Long.toBinaryString(labels[i])); + if (!isLeaf()) { + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list){ + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + int label = labels[i]; + 
if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + int label = labels[i]; + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list) { + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + int label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + int label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + private class LongNode extends Node{ + private long[] labels; + + public LongNode(int index){ + this(index, 64, 0); + } + + private LongNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 64); + labels = new long[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + 
public long getLabelAt(int i){ + return labels[i] << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, (label & getMask()) >>> ntz); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = ((label & getMask()) >>> ntz); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for(int i = 0; i < labels.length; i++){ + ps.println(indent + Long.toBinaryString(labels[i])); + if(!isLeaf()){ + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list){ + long mask = getMask(); + long bits = 0; + if(index < longs.length){ + bits = (longs[index] & mask) >>> ntz; + } + + long neighb = 0; + if(index < neighbors.length){ + neighb = (neighbors[index] & mask) >>> ntz; + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + long label = labels[i]; + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + long label = labels[i]; + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< XBitSet > list) { + long mask = getMask(); + long bits = 0; + if(index < longs.length){ + bits = (longs[index] & mask) >>> ntz; + } + + long neighb = 0; + if(index < 
neighbors.length){ + neighb = (neighbors[index] & mask) >>> ntz; + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + long label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + long label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + public BlockSieve(int n, int targetWidth, int margin){ + this.n = n; + this.targetWidth = targetWidth; + this.margin = margin; + root = new LongNode(0); + last = (n - 1) / 64; + } + + public XBitSet put(XBitSet bs, XBitSet value){ + long longs[] = bs.toLongArray(); + Node node = root, parent = null; + + int i = 0, j1 = 0; + long bits = 0; + for(;;){ + bits = 0; + if(i < longs.length){ + bits = longs[i]; + } + int j = node.indexOf(bits); + if(j < 0){ + break; + } + if(node.isLeaf()){ + return node.values[j]; + } + parent = node; + node = node.children[j]; + i = node.index; + j1 = j; + } + + if(node.isLeaf()){ + node.add(bits, value); + } + else if(node.isLastInInterval()){ + node.add(bits, newPath(i + 1, longs, value)); + } + else{ + Node header = newNode( + i, 64 - (node.ntz + node.width), node.ntz + node.width); + if(!header.isLeaf()){ + header.add(bits, newPath(i + 1, longs, value)); + } + else{ + header.add(bits, value); + } + node.add(bits, header); + } + + ++size; + if(parent != null){ + parent.children[j1] = tryWidthResizing(node); + } + else{ + root = tryWidthResizing(node); + } + + return null; + } + + private Node tryWidthResizing(Node node){ + if(node.size() > MAX_CHILDREN_SIZE){ + Node node1 = resizeWidth(node); + 
for(int i = 0; i < node1.children.length; i++){ + node1.children[i] = tryWidthResizing(node1.children[i]); + } + return node1; + } + return node; + } + + private Node resizeWidth(Node node){ + int w = node.width, leng = node.size(); + long m = node.getMask(); + + long[] l = new long[leng]; + int ntz = Long.numberOfTrailingZeros(m); + int t = ntz + node.width; + while(l.length > MAX_CHILDREN_SIZE){ + t = (ntz + t) / 2; + m = Unsigned.consecutiveOneBit(ntz, t); + int p = 0; + for(int i = 0; i < leng; i++){ + long label = ((node.getLabelAt(i) & m) >>> ntz); + int j = Unsigned.binarySearch(l, 0, p, label); + if(j < 0){ + j = -j - 1; + for(int k = p; k - 1 >= j; k--){ + l[k] = l[k - 1]; + } + l[j] = label; + ++p; + } + } + l = Arrays.copyOfRange(l, 0, p); + } + + Node[] c = new Node[l.length]; + for(int i = 0; i < c.length; i++){ + long msk = node.getMask() & ~m; + c[i] = newNode(node.index, + Long.bitCount(msk), Long.numberOfTrailingZeros(msk)); + } + + for(int i = 0; i < leng; i++){ + long label = node.getLabelAt(i); + int j = Unsigned.binarySearch(l, ((label & m) >>> ntz)); + if(!node.isLeaf()){ + c[j].add(label, node.children[i]); + } + else{ + c[j].add(label, + node.values[i], node.cardinalities[i]); + } + } + + Node n1 = newNode(node.index, + Long.bitCount(m), Long.numberOfTrailingZeros(m)); + + for(int i = 0; i < l.length; i++){ + n1.add(l[i] << ntz, c[i]); + } + + return n1; + } + + private Node newNode(int index, int width, int ntz){ + if(width > 32){ + return new LongNode(index, width, ntz); + } + else if(width > 16){ + return new IntegerNode(index, width, ntz); + } + else if(width > 8){ + return new ShortNode(index, width, ntz); + } + else{ + return new ByteNode(index, width, ntz); + } + } + + private Node newPath(int index, long[] longs, XBitSet value){ + Node node = new LongNode(index); + + long bits = 0; + if(index < longs.length){ + bits = longs[index]; + } + + if(index == last){ + node.add(bits, value); + } + else{ + node.add(bits, newPath(index + 1, 
longs, value)); + } + + return node; + } + + public void collectSuperblocks( + XBitSet component, XBitSet neighbors, ArrayList< XBitSet > list){ + root.filterSuperblocks(component.toLongArray(), + neighbors.toLongArray(), 0, list); + } + + public void collectSubblocks( + XBitSet component, XBitSet neighbors, ArrayList< XBitSet > list){ + root.filterSubblocks(component.toLongArray(), + neighbors.toLongArray(), 0, list); + } + + public int size(){ + return size; + } + + public void dump(PrintStream ps){ + root.dump(ps, ""); + } + + public static void main(String args[]){ + } +} diff --git a/solvers/TCS-Meiji/tw/exact/Graph.java b/solvers/TCS-Meiji/tw/exact/Graph.java new file mode 100644 index 0000000..9ce68b6 --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/Graph.java @@ -0,0 +1,990 @@ +/* + * Copyright (c) 2016, Hisao Tamaki + */ +package tw.exact; + +import java.io.BufferedReader; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.FileReader; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.BitSet; +import java.util.Random; + +/** + * This class provides a representation of undirected simple graphs. + * The vertices are identified by non-negative integers + * smaller than {@code n} where {@code n} is the number + * of vertices of the graph. + * The degree (the number of adjacent vertices) of each vertex + * is stored in an array {@code degree} indexed by the vertex number + * and the adjacency lists of each vertex + * is also referenced from an array {@code neighbor} indexed by + * the vertex number. These arrays as well as the int variable {@code n} + * are public to allow easy access to the graph content. + * Reading from and writing to files as well as some basic + * graph algorithms, such as decomposition into connected components, + * are provided. 
+ *
+ * @author Hisao Tamaki
+ */
+public class Graph {
+  /**
+   * number of vertices
+   */
+  public int n;
+
+  /**
+   * array of vertex degrees
+   */
+  public int[] degree;
+
+  /**
+   * array of adjacency lists each represented by an integer array
+   */
+  public int[][] neighbor;
+
+  /**
+   * set representation of the adjacencies.
+   * {@code neighborSet[v]} is the set of vertices
+   * adjacent to vertex {@code v}
+   */
+  public XBitSet[] neighborSet;
+
+  /**
+   * the set of all vertices, represented as an all-one
+   * bit vector
+   */
+  public XBitSet all;
+
+  /*
+   * variables used in the DFS algorithms for
+   * connected components and
+   * biconnected components.
+   */
+  private int nc;
+  private int mark[];
+  private int dfn[];
+  private int low[];
+  private int dfCount;
+  private XBitSet articulationSet;
+
+  /**
+   * Construct a graph with the specified number of
+   * vertices and no edges. Edges will be added by
+   * the {@code addEdge} method
+   * @param n the number of vertices
+   */
+  public Graph(int n) {
+    this.n = n;
+    this.degree = new int[n];
+    this.neighbor = new int[n][];
+    this.neighborSet = new XBitSet[n];
+    for (int i = 0; i < n; i++) {
+      neighborSet[i] = new XBitSet(n);
+    }
+    this.all = new XBitSet(n);
+    for (int i = 0; i < n; i++) {
+      all.set(i);
+    }
+  }
+
+  /**
+   * Add an edge between two specified vertices.
+   * This is done by adding each vertex to the adjacency list
+   * of the other.
+   * No effect if the specified edge is already present.
+   * @param u vertex (one end of the edge)
+   * @param v vertex (the other end of the edge)
+   */
+  public void addEdge(int u, int v) {
+    addToNeighbors(u, v);
+    addToNeighbors(v, u);
+  }
+
+  /**
+   * Add vertex {@code v} to the adjacency list of {@code u}
+   * @param u vertex number
+   * @param v vertex number
+   */
+  private void addToNeighbors(int u, int v) {
+    if (indexOf(v, neighbor[u]) >= 0) {
+      return;
+    }
+    degree[u]++;
+    if (neighbor[u] == null) {
+      neighbor[u] = new int[]{v};
+    }
+    else {
+      neighbor[u] = Arrays.copyOf(neighbor[u], degree[u]);
+      neighbor[u][degree[u] - 1] = v;
+    }
+
+    if (neighborSet[u] == null) {
+      neighborSet[u] = new XBitSet(n);
+    }
+    neighborSet[u].set(v);
+  }
+
+  /**
+   * Returns the number of edges of this graph
+   * @return the number of edges
+   */
+  public int numberOfEdges() {
+    int count = 0;
+    for (int i = 0; i < n; i++) {
+      count += degree[i];
+    }
+    return count / 2;
+  }
+
+  /**
+   * Inherit edges of the given graph into this graph,
+   * according to the conversion tables for vertex numbers.
+   * @param g graph
+   * @param conv vertex conversion table from the given graph to
+   * this graph: if {@code v} is a vertex of graph {@code g}, then
+   * {@code conv[v]} is the corresponding vertex in this graph;
+   * {@code conv[v] = -1} if {@code v} does not have a corresponding vertex
+   * in this graph
+   * @param inv vertex conversion table from this graph to
+   * the argument graph: if {@code v} is a vertex of this graph,
+   * then {@code inv[v]} is the corresponding vertex in graph {@code g};
+   * it is assumed that {@code v} always has a corresponding vertex in
+   * graph g.
+   *
+   */
+  public void inheritEdges(Graph g, int conv[], int inv[]) {
+    for (int v = 0; v < n; v++) {
+      int x = inv[v];
+      for (int i = 0; i < g.degree[x]; i++) {
+        int y = g.neighbor[x][i];
+        int u = conv[y];
+        if (u >= 0) {
+          addEdge(u, v);
+        }
+      }
+    }
+  }
+
+  /**
+   * Read a graph from the specified file in {@code dgf} format and
+   * return the resulting {@code Graph} object.
+   * @param path the path of the directory containing the file
+   * @param name the file name without the extension ".dgf"
+   * @return the resulting {@code Graph} object; null if the reading fails
+   */
+  public static Graph readGraphDgf(String path, String name) {
+    File file = new File(path + "/" + name + ".dgf");
+    return readGraphDgf(file);
+  }
+
+  /**
+   * Read a graph from the specified file in {@code dgf} format and
+   * return the resulting {@code Graph} object.
+   * @param file file from which to read
+   * @return the resulting {@code Graph} object; null if the reading fails
+   */
+  public static Graph readGraphDgf(File file) {
+    try {
+      BufferedReader br = new BufferedReader(new FileReader(file));
+      String line = br.readLine();
+      while (line.startsWith("c")) {
+        line = br.readLine();
+      }
+      if (line.startsWith("p")) {
+        String s[] = line.split(" ");
+        int n = Integer.parseInt(s[2]);
+        // m is twice the number of edges explicitly listed
+        int m = Integer.parseInt(s[3]);
+        Graph g = new Graph(n);
+
+        for (int i = 0; i < m; i++) {
+          line = br.readLine();
+          while (!line.startsWith("e")) {
+            line = br.readLine();
+          }
+          s = line.split(" ");
+          int u = Integer.parseInt(s[1]) - 1;
+          int v = Integer.parseInt(s[2]) - 1;
+          g.addEdge(u, v);
+        }
+        return g;
+      }
+      else {
+        throw new RuntimeException("!!No problem description");
+      }
+
+    } catch (FileNotFoundException e) {
+      e.printStackTrace();
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+    return null;
+  }
+
+  /**
+   * Read a graph from the specified file in {@code col} format and
+   * return the resulting {@code Graph} object.
+   * @param path the path of the directory containing the file
+   * @param name the file name without the extension ".col"
+   * @return the resulting {@code Graph} object; null if the reading fails
+   */
+  public static Graph readGraphCol(String path, String name) {
+    File file = new File(path + "/" + name + ".col");
+    try {
+      BufferedReader br = new BufferedReader(new FileReader(file));
+      String line = br.readLine();
+      while (line.startsWith("c")) {
+        line = br.readLine();
+      }
+      if (line.startsWith("p")) {
+        String s[] = line.split(" ");
+        int n = Integer.parseInt(s[2]);
+        // m is twice the number of edges in this format
+        int m = Integer.parseInt(s[3]);
+        Graph g = new Graph(n);
+
+        for (int i = 0; i < m; i++) {
+          line = br.readLine();
+          while (line != null && !line.startsWith("e")) {
+            line = br.readLine();
+          }
+          if (line == null) {
+            break;
+          }
+          s = line.split(" ");
+          int u = Integer.parseInt(s[1]);
+          int v = Integer.parseInt(s[2]);
+          g.addEdge(u - 1, v - 1);
+        }
+        return g;
+      }
+      else {
+        throw new RuntimeException("!!No problem description");
+      }
+
+    } catch (FileNotFoundException e) {
+      e.printStackTrace();
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+    return null;
+  }
+
+  /**
+   * Read a graph from the specified file in {@code gr} format and
+   * return the resulting {@code Graph} object.
+   * The vertex numbers 1~n in the gr file format are
+   * converted to 0~n-1 in the internal representation.
+   * @param path the path of the directory containing the file
+   * @param name the file name without the extension ".gr"
+   * @return the resulting {@code Graph} object; null if the reading fails
+   */
+  public static Graph readGraph(String path, String name) {
+    File file = new File(path + "/" + name + ".gr");
+    return readGraph(file);
+  }
+
+  /**
+   * Read a graph from the specified file in {@code gr} format and
+   * return the resulting {@code Graph} object.
+   * The vertex numbers 1~n in the gr file format are
+   * converted to 0~n-1 in the internal representation.
+   * @param file file from which to read
+   * @return the resulting {@code Graph} object; null if the reading fails
+   */
+  public static Graph readGraph(File file) {
+    try {
+      BufferedReader br = new BufferedReader(new FileReader(file));
+      String line = br.readLine();
+      while (line.startsWith("c")) {
+        line = br.readLine();
+      }
+      if (line.startsWith("p")) {
+        String s[] = line.split(" ");
+        if (!s[1].equals("tw")) {
+          throw new RuntimeException("!!Not treewidth instance");
+        }
+        int n = Integer.parseInt(s[2]);
+        int m = Integer.parseInt(s[3]);
+        Graph g = new Graph(n);
+
+        for (int i = 0; i < m; i++) {
+          line = br.readLine();
+          while (line.startsWith("c")) {
+            line = br.readLine();
+          }
+          s = line.split(" ");
+          int u = Integer.parseInt(s[0]);
+          int v = Integer.parseInt(s[1]);
+          g.addEdge(u - 1, v - 1);
+        }
+        return g;
+      }
+      else {
+        throw new RuntimeException("!!No problem description");
+      }
+
+    } catch (FileNotFoundException e) {
+      e.printStackTrace();
+    } catch (IOException e) {
+      e.printStackTrace();
+    }
+    return null;
+  }
+
+  /**
+   * Read a graph from the specified input stream in {@code gr} format and
+   * return the resulting {@code Graph} object.
+   * The vertex numbers 1~n in the gr file format are
+   * converted to 0~n-1 in the internal representation.
+ * @param is the input stream representing the graph + * @return the resulting {@code Graph} object; null if the reading fails + */ + public static Graph readGraph(InputStream is) { + try { + BufferedReader br = new BufferedReader( + new InputStreamReader(is)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("p")) { + String s[] = line.split(" "); + if (!s[1].equals("tw")) { + throw new RuntimeException("!!Not treewidth instance"); + } + int n = Integer.parseInt(s[2]); + int m = Integer.parseInt(s[3]); + Graph g = new Graph(n); + + for (int i = 0; i < m; i++) { + line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + s = line.split(" "); + int u = Integer.parseInt(s[0]); + int v = Integer.parseInt(s[1]); + g.addEdge(u - 1, v - 1); + } + + br.close(); + return g; + } + else { + throw new RuntimeException("!!No problem description"); + } + + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return null; + } + + /** + * finds the first occurrence of the + * given integer in the given int array + * @param x value to be searched + * @param a array + * @return the smallest {@code i} such that + * {@code a[i]} = {@code x}; + * -1 if no such {@code i} exists + */ + private static int indexOf(int x, int a[]) { + if (a == null) { + return -1; + } + for (int i = 0; i < a.length; i++) { + if (x == a[i]) { + return i; + } + } + return -1; + } + + /** + * returns true if two vertices are adjacent to each other + * in this target graph + * @param u a vertex + * @param v another vertex + * @return {@code true} if {@code u} is adjacent to {@code v}; + * {@code false} otherwise + */ + public boolean areAdjacent(int u, int v) { + return indexOf(v, neighbor[u]) >= 0; + } + + /** + * returns the minimum degree, the smallest d such that + * there is some vertex {@code v} with {@code degree[v]} = d, + * of this target graph + * 
@return the minimum degree + */ + public int minDegree() { + if (n == 0) { + return 0; + } + int min = degree[0]; + for (int v = 0; v < n; v++) { + if (degree[v] < min) min = degree[v]; + } + return min; + } + + /** + * Computes the neighbor set for a given set of vertices + * @param set set of vertices + * @return an {@code XBitSet} representing the neighbor set of + * the given vertex set + */ + public XBitSet neighborSet(XBitSet set) { + XBitSet result = new XBitSet(n); + for (int v = set.nextSetBit(0); v >= 0; + v = set.nextSetBit(v + 1)) { + result.or(neighborSet[v]); + } + result.andNot(set); + return result; + } + + /** + * Computes the closed neighbor set for a given set of vertices + * @param set set of vertices + * @return an {@code XBitSet} representing the closed neighbor set of + * the given vertex set + */ + public XBitSet closedNeighborSet(XBitSet set) { + XBitSet result = (XBitSet) set.clone(); + for (int v = set.nextSetBit(0); v >= 0; + v = set.nextSetBit(v + 1)) { + result.or(neighborSet[v]); + } + return result; + } + + /** + * Compute connected components of this target graph after + * the removal of the vertices in the given separator, + * using Depth-First Search + * @param separator set of vertices to be removed + * @return the arrayList of connected components, + * the vertex set of each component represented by a {@code XBitSet} + */ + public ArrayList getComponentsDFS(XBitSet separator) { + ArrayList result = new ArrayList(); + mark = new int[n]; + for (int v = 0; v < n; v++) { + if (separator.get(v)) { + mark[v] = -1; + } + } + + nc = 0; + + for (int v = 0; v < n; v++) { + if (mark[v] == 0) { + nc++; + markFrom(v); + } + } + + for (int c = 1; c <= nc; c++) { + result.add(new XBitSet(n)); + } + + for (int v = 0; v < n; v++) { + int c = mark[v]; + if (c >= 1) { + result.get(c - 1).set(v); + } + } + return result; + } + + /** + * Recursive method for depth-first search. + * Vertices reachable from the given vertex, + * passing through only 
unmarked vertices (vertices + * with the mark[] value being 0 or -1), + * are marked by the value of {@code nc} which + * is a positive integer + * @param v vertex to be visited + */ + private void markFrom(int v) { + if (mark[v] != 0) return; + mark[v] = nc; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + markFrom(w); + } + } + + /** + * Compute connected components of this target graph after + * the removal of the vertices in the given separator, + * by means of iterated bit operations + * @param separator set of vertices to be removed + * @return the arrayList of connected components, + * the vertex set of each component represented by a {@code XBitSet} + */ + public ArrayList getComponents(XBitSet separator) { + ArrayList result = new ArrayList(); + XBitSet rest = all.subtract(separator); + for (int v = rest.nextSetBit(0); v >= 0; + v = rest.nextSetBit(v + 1)) { + XBitSet c = (XBitSet) neighborSet[v].clone(); + XBitSet toBeScanned = c.subtract(separator); + c.set(v); + while (!toBeScanned.isEmpty()) { + XBitSet save = (XBitSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; + w = toBeScanned.nextSetBit(w + 1)) { + c.or(neighborSet[w]); + } + toBeScanned = c.subtract(save); + toBeScanned.andNot(separator); + } + result.add(c.subtract(separator)); + rest.andNot(c); + } + + return result; + } + + /** + * Compute the full components associated with the given separator, + * by means of iterated bit operations + * @param separator set of vertices to be removed + * @return the arrayList of full components, + * the vertex set of each component represented by a {@code XBitSet} + */ + public ArrayList getFullComponents(XBitSet separator) { + ArrayList result = new ArrayList(); + XBitSet rest = all.subtract(separator); + for (int v = rest.nextSetBit(0); v >= 0; + v = rest.nextSetBit(v + 1)) { + XBitSet c = (XBitSet) neighborSet[v].clone(); + XBitSet toBeScanned = c.subtract(separator); + c.set(v); + while (!toBeScanned.isEmpty()) { + 
XBitSet save = (XBitSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; + w = toBeScanned.nextSetBit(w + 1)) { + c.or(neighborSet[w]); + } + toBeScanned = c.subtract(save); + toBeScanned.andNot(separator); + } + if (separator.isSubset(c)) { + result.add(c.subtract(separator)); + } + rest.andNot(c); + } + return result; + } + + /** + * Checks if the given induced subgraph of this target graph is connected. + * @param vertices the set of vertices inducing the subgraph + * @return {@code true} if the subgraph is connected; {@code false} otherwise + */ + + public boolean isConnected(XBitSet vertices) { + int v = vertices.nextSetBit(0); + if (v < 0) { + return true; + } + + XBitSet c = (XBitSet) neighborSet[v].clone(); + XBitSet toScan = c.intersectWith(vertices); + c.set(v); + while (!toScan.isEmpty()) { + XBitSet save = (XBitSet) c.clone(); + for (int w = toScan.nextSetBit(0); w >= 0; + w = toScan.nextSetBit(w + 1)) { + c.or(neighborSet[w]); + } + toScan = c.subtract(save); + toScan.and(vertices); + } + return vertices.isSubset(c); + } + + /** + * Checks if the given induced subgraph of this target graph is biconnected. + * @param vertices the set of vertices inducing the subgraph + * @return {@code true} if the subgraph is biconnected; {@code false} otherwise + */ + public boolean isBiconnected(BitSet vertices) { +// if (!isConnected(vertices)) { +// return false; +// } + dfCount = 1; + dfn = new int[n]; + low = new int[n]; + + for (int v = 0; v < n; v++) { + if (!vertices.get(v)) { + dfn[v] = -1; + } + } + + int s = vertices.nextSetBit(0); + dfn[s] = dfCount++; + low[s] = dfn[s]; + + boolean first = true; + for (int i = 0; i < degree[s]; i++) { + int v = neighbor[s][i]; + if (dfn[v] != 0) { + continue; + } + if (!first) { + return false; + } + boolean b = dfsForBiconnectedness(v); + if (!b) return false; + else { + first = false; + } + } + return true; + } + + /** + * Depth-first search for deciding biconnectivity. 
+ * @param v vertex to be visited + * @return {@code true} if no articulation point is found + * in the search starting from {@code v}; {@code false} otherwise + */ + private boolean dfsForBiconnectedness(int v) { + dfn[v] = dfCount++; + low[v] = dfn[v]; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + if (dfn[w] > 0 && dfn[w] < low[v]) { + low[v] = dfn[w]; + } + else if (dfn[w] == 0) { + boolean b = dfsForBiconnectedness(w); + if (!b) { + return false; + } + if (low[w] >= dfn[v]) { + return false; + } + if (low[w] < low[v]) { + low[v] = low[w]; + } + } + } + return true; + } + + + /** + * Checks if the given induced subgraph of this target graph is triconnected. + * This implementation is naive and calls isBiconnected n times, where n is + * the number of vertices + * @param vertices the set of vertices inducing the subgraph + * @return {@code true} if the subgraph is triconnected; {@code false} otherwise + */ + public boolean isTriconnected(BitSet vertices) { + if (!isBiconnected(vertices)) { + return false; + } + + BitSet work = (BitSet) vertices.clone(); + int prev = -1; + for (int v = vertices.nextSetBit(0); v >= 0; + v = vertices.nextSetBit(v + 1)) { + if (prev >= 0) { + work.set(prev); + } + prev = v; + work.clear(v); + if (!isBiconnected(work)) { + return false; + } + } + return true; + } + + /** + * Compute articulation vertices of the subgraph of this + * target graph induced by the given set of vertices + * Assumes this subgraph is connected; otherwise, only + * those articulation vertices in the first connected component + * are obtained. 
+ * + * @param vertices the set of vertices of the subgraph + * @return the set of articulation vertices + */ + public XBitSet articulations(BitSet vertices) { + articulationSet = new XBitSet(n); + dfCount = 1; + dfn = new int[n]; + low = new int[n]; + + for (int v = 0; v < n; v++) { + if (!vertices.get(v)) { + dfn[v] = -1; + } + } + + depthFirst(vertices.nextSetBit(0)); + return articulationSet; + } + + /** + * Depth-first search for listing articulation vertices. + * The articulations found in the search are + * added to the {@code XBitSet articulationSet}. + * @param v vertex to be visited + */ + private void depthFirst(int v) { + dfn[v] = dfCount++; + low[v] = dfn[v]; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + if (dfn[w] > 0) { + low[v] = Math.min(low[v], dfn[w]); + } + else if (dfn[w] == 0) { + depthFirst(w); + if (low[w] >= dfn[v] && + (dfn[v] > 1 || !lastNeighborIndex(v, i))){ + articulationSet.set(v); + } + low[v] = Math.min(low[v], low[w]); + } + } + } + + /** + * Decides if the given index is the effectively + * last index of the neighbor array of the given vertex, + * ignoring vertices not in the current subgraph + * considered, which is known by their dfn being -1. + * @param v the vertex in question + * @param i the index in question + * @return {@code true} if {@code i} is effectively + * the last index of the neighbor array of vertex {@code v}; + * {@code false} otherwise. 
+ */ + + private boolean lastNeighborIndex(int v, int i) { + for (int j = i + 1; j < degree[v]; j++) { + int w = neighbor[v][j]; + if (dfn[w] == 0) { + return false; + } + } + return true; + } + + /** + * fill the specified vertex set into a clique + * @param vertexSet vertex set to be filled + */ + public void fill(XBitSet vertexSet) { + for (int v = vertexSet.nextSetBit(0); v >= 0; + v = vertexSet.nextSetBit(v + 1)) { + XBitSet missing = vertexSet.subtract(neighborSet[v]); + for (int w = missing.nextSetBit(v + 1); w >= 0; + w = missing.nextSetBit(w + 1)) { + addEdge(v, w); + } + } + } + + /** + * fill the specified vertex set into a clique + * @param vertices int array listing the vertices in the set + */ + public void fill(int[] vertices) { + for (int i = 0; i < vertices.length; i++) { + for (int j = i + 1; j < vertices.length; j++) { + addEdge(vertices[i], vertices[j]); + } + } + } + + /** list all maximal cliques of this graph + * Naive implementation, should be replaced by a better one + * @return the list of maximal cliques, each represented by an {@code XBitSet} + */ + public ArrayList listMaximalCliques() { + ArrayList list = new ArrayList<>(); + XBitSet subg = new XBitSet(n); + XBitSet cand = new XBitSet(n); + XBitSet qlique = new XBitSet(n); + subg.set(0,n); + cand.set(0,n); + listMaximalCliques(subg, cand, qlique, list); + return list; + } + + /** + * Auxiliary recursive method for listing maximal cliques + * Adds to {@code list} all maximal cliques + * @param subg + * @param cand + * @param qlique + * @param list + */ + private void listMaximalCliques(XBitSet subg, XBitSet cand, + XBitSet qlique, ArrayList list) { + if(subg.isEmpty()){ + list.add((XBitSet)qlique.clone()); + return; + } + int max = -1; + XBitSet u = new XBitSet(n); + for(int i=subg.nextSetBit(0);i>=0;i=subg.nextSetBit(i+1)){ + XBitSet tmp = new XBitSet(n); + tmp.set(i); + tmp = neighborSet(tmp); + tmp.and(cand); + if(tmp.cardinality() > max){ + max = tmp.cardinality(); + u = tmp; + } + } + XBitSet candu = (XBitSet) cand.clone(); + candu.andNot(u); + 
while(!candu.isEmpty()){ + int i = candu.nextSetBit(0); + XBitSet tmp = new XBitSet(n); + tmp.set(i); + qlique.set(i); + XBitSet subgq = (XBitSet) subg.clone(); + subgq.and(neighborSet(tmp)); + XBitSet candq = (XBitSet) cand.clone(); + candq.and(neighborSet(tmp)); + listMaximalCliques(subgq,candq,qlique,list); + cand.clear(i); + candu.clear(i); + qlique.clear(i); + } + } + + /** + * Saves this target graph in the file specified by a path string, + * in .gr format. + * A stack trace will be printed if the file is not available for writing + * @param path the path-string + */ + public void save(String path) { + File outFile = new File(path); + PrintStream ps; + try { + ps = new PrintStream(new FileOutputStream(outFile)); + writeTo(ps); + ps.close(); + } catch (FileNotFoundException e) { + e.printStackTrace(); + } + } + /** + * Write this target graph in .gr format to the given + * print stream. + * @param ps print stream + */ + public void writeTo(PrintStream ps) { + int m = 0; + for (int i = 0; i < n; i++) { + m += degree[i]; + } + m = m / 2; + ps.println("p tw " + n + " " + m); + for (int i = 0; i < n; i++) { + for (int j = 0; j < degree[i]; j++) { + int k = neighbor[i][j]; + if (i < k) { + ps.println((i + 1) + " " + (k + 1)); + } + } + } + } + + /** + * Create a copy of this target graph + * @return the copy of this graph + */ + public Graph copy() { + Graph tmp = new Graph(n); + for (int v = 0; v < n; v++) { + for (int j = 0; j < neighbor[v].length; j++) { + int w = neighbor[v][j]; + tmp.addEdge(v, w); + } + } + return tmp; + } + + /** + * Check consistency of this graph + * + */ + public void checkConsistency() throws RuntimeException { + for (int v = 0; v < n; v++) { + for (int w = 0; w < n; w++) { + if (v == w) continue; + if (indexOf(v, neighbor[w]) >= 0 && + indexOf(w, neighbor[v]) < 0) { + throw new RuntimeException("adjacency lists inconsistent " + v + ", " + w); + } + if (neighborSet[v].get(w) && + !neighborSet[w].get(v)) { + throw new 
RuntimeException("neighborSets inconsistent " + v + ", " + w); + } + } + } + } + /** + * Create a random graph with the given number of vertices and + * the given number of edges + * @param n the number of vertices + * @param m the number of edges + * @param seed the seed for the pseudo random number generation + * @return {@code Graph} instance constructed + */ + public static Graph randomGraph(int n, int m, int seed) { + Random random = new Random(seed); + Graph g = new Graph(n); + + int k = 0; + int j = 0; + int m0 = n * (n - 1) / 2; + for (int v = 0; v < n; v++) { + for (int w = v + 1; w < n; w++) { + int r = random.nextInt(m0 - j); + if (r < m - k) { + g.addEdge(v, w); + g.addEdge(w, v); + k++; + } + j++; + } + } + return g; + } + + public static void main(String args[]) { + // an example of the use of random graph generation + Graph g = randomGraph(80, 1000, 1); + g.save("instance/random/gnm_80_1000_1.gr"); + } +} diff --git a/solvers/TCS-Meiji/tw/exact/GreedyDecomposer.java b/solvers/TCS-Meiji/tw/exact/GreedyDecomposer.java new file mode 100644 index 0000000..3f609ed --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/GreedyDecomposer.java @@ -0,0 +1,223 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.io.File; + +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.Random; +import java.util.Set; + +public class GreedyDecomposer { + +// static final boolean VERBOSE = true; + private static final boolean VERBOSE = false; + // private static boolean DEBUG = true; + static boolean DEBUG = false; + + Graph g; + + Bag whole; + + Mode mode; + + ArrayList frontier; + XBitSet remaining; + + Set unsafes; + Set safes; + SafeSeparator ss; + + Random random; + + public enum Mode { + fill, defect, degree, safeFirst +// fill, defect, degree + } + + public GreedyDecomposer(Bag whole) { + this(whole, Mode.fill); + } + + 
public GreedyDecomposer(Bag whole, Mode mode) { + this.whole = whole; + this.mode = mode; + + // need a copy as we fill edges + this.g = whole.graph.copy(); + if (mode == Mode.safeFirst) { + safes = new HashSet<>(); + unsafes = new HashSet<>(); + ss = new SafeSeparator(whole.graph); + } + } + + public void decompose() { + whole.initializeForDecomposition(); + frontier = new ArrayList<>(); + remaining = (XBitSet) g.all.clone(); + + while (!remaining.isEmpty()) { + int vmin = remaining.nextSetBit(0); + int minCost = costOf(vmin); + +// ArrayList minFillVertices = new ArrayList<>(); +// minFillVertices.add(vmin); + + for (int v = remaining.nextSetBit(vmin + 1); + v >= 0; v = remaining.nextSetBit(v + 1)) { + int cost = costOf(v); + if (cost < minCost) { + minCost = cost; + vmin = v; + } + } + + ArrayList joined = new ArrayList<>(); + + XBitSet toBeAClique = new XBitSet(g.n); + toBeAClique.set(vmin); + + for (Separator s: frontier) { + XBitSet vs = s.vertexSet; + if (vs.get(vmin)) { + joined.add(s); + toBeAClique.or(vs); + } + } + +// System.out.println(joined.size() + " joined"); + + if (joined.isEmpty()) { + toBeAClique.set(vmin); + } + else if (joined.size() == 1) { + Separator uniqueSeparator = joined.get(0); + if (g.neighborSet[vmin].intersectWith(remaining) + .isSubset(uniqueSeparator.vertexSet)) { + uniqueSeparator.removeVertex(vmin); + if (uniqueSeparator.vertexSet.isEmpty()) { + whole.separators.remove(uniqueSeparator); + for (Bag b: uniqueSeparator.incidentBags) { + b.incidentSeparators.remove(uniqueSeparator); + } + frontier.remove(uniqueSeparator); + } + remaining.clear(vmin); + if (VERBOSE) { + System.out.println("cleared " + vmin + " from" + + uniqueSeparator); + } + continue; + } + } + + toBeAClique.or(g.neighborSet[vmin].intersectWith(remaining)); + + Bag bag = whole.addNestedBag(toBeAClique); + + if (VERBOSE) { + System.out.println("added bag with " + vmin + ", " + bag); + } + + g.fill(toBeAClique); + + XBitSet sep = toBeAClique.subtract( + new 
XBitSet(new int[]{vmin})); + + if (!sep.isEmpty()) { + Separator separator = + whole.addSeparator(sep); + + if (VERBOSE) { + System.out.println("added separator " + separator + + " with " + vmin + " absorbed"); + } + + separator.addIncidentBag(bag); + bag.addIncidentSeparator(separator); + + frontier.add(separator); + } + + if (VERBOSE) { + System.out.println("adding incidences to bag: " + bag); + } + + for (Separator s: joined) { + assert !s.vertexSet.isEmpty(); + s.addIncidentBag(bag); + bag.addIncidentSeparator(s); + if (VERBOSE) { + System.out.println(" " + s); + } + frontier.remove(s); + } + + remaining.clear(vmin); + } + + whole.setWidth(); + } + + int costOf(int v) { + switch (mode) { + case fill: return countFill(v); + case defect: return defectCount(v); + case degree: return degreeOf(v); + case safeFirst: { + XBitSet ns = g.neighborSet[v]; + ns.set(v); + if (safes.contains(ns)) { + return countFill(v); + } + else if (unsafes.contains(ns)) { + return g.n * g.n + countFill(v); + } + else if (ss.isSafeSeparator(ns)) { + safes.add(ns); + return countFill(v); + } + else { + unsafes.add(ns); + return g.n * g.n + countFill(v); + } + } + default: return 0; + } + } + + int defectCount(int v) { + int count = 0; + + XBitSet ns = g.neighborSet[v].intersectWith(remaining); + for (int w = ns.nextSetBit(0); w >= 0; + w = ns.nextSetBit(w + 1)) { + if (ns.subtract(g.neighborSet[w]).cardinality() > 1) { + count++; + } + } + return count; + } + + int countFill(int v) { + int count = 0; + XBitSet ns = g.neighborSet[v].intersectWith(remaining); + for (int w = ns.nextSetBit(0); w >= 0; + w = ns.nextSetBit(w + 1)) { + count += ns.subtract(g.neighborSet[w]).cardinality() - 1; + } + return count / 2; + } + + int degreeOf(int v) { + XBitSet ns = g.neighborSet[v].intersectWith(remaining); + return ns.cardinality(); + } +} diff --git a/solvers/TCS-Meiji/tw/exact/IODecomposer.java b/solvers/TCS-Meiji/tw/exact/IODecomposer.java new file mode 100644 index 0000000..ceae3ef --- /dev/null 
+++ b/solvers/TCS-Meiji/tw/exact/IODecomposer.java @@ -0,0 +1,718 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.io.File; + +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedList; +import java.util.Map; +import java.util.Queue; +import java.util.Set; + +public class IODecomposer { + +// static final boolean VERBOSE = true; + private static final boolean VERBOSE = false; +// private static boolean DEBUG = true; + static boolean DEBUG = false; + + Graph g; + + Bag currentBag; + + LayeredSieve oBlockSieve; + + Queue readyQueue; + + ArrayList pendingEndorsers; + +// Set processed; + + Map oBlockCache; + + Map blockCache; + + Map iBlockCache; + + Set pmcCache; + + int upperBound; + int lowerBound; + + int targetWidth; + + PMC solution; + + SafeSeparator ss; + + static int TIMEOUT_CHECK = 100; + + public IODecomposer(Bag bag, + int lowerBound, int upperBound) { + + currentBag = bag; + g = bag.graph; + if (!g.isConnected(g.all)) { + System.err.println("graph must be connected, size = " + bag.size); + } + this.lowerBound = lowerBound; + this.upperBound = upperBound; + + ss = new SafeSeparator(g); + } + + public void decompose() { + blockCache = new HashMap<>(); + iBlockCache = new HashMap<>(); + + pendingEndorsers = new ArrayList<>(); + pmcCache = new HashSet<>(); + + + while (targetWidth <= upperBound) { + if (VERBOSE) { + System.out.println("decompose loop, n = " + currentBag.size + + ", targetWidth = " + targetWidth); + } + + + if (currentBag.size <= targetWidth + 1) { + currentBag.nestedBags = null; + currentBag.separators = null; + return; + } + + // endorserMap = new HashMap<>(); + + oBlockSieve = new LayeredSieve(g.n, targetWidth); + oBlockCache = new HashMap<>(); + + readyQueue = new 
LinkedList<>(); + + readyQueue.addAll(iBlockCache.values()); + + for (int v = 0; v < g.n; v++) { + XBitSet cnb = (XBitSet) g.neighborSet[v].clone(); + cnb.set(v); + + if (DEBUG) { + System.out.println(v + ":" + cnb.cardinality() + ", " + cnb); + } + + if (cnb.cardinality() > targetWidth + 1) { + continue; + } + + // if (!pmcCache.contains(cnb)) { + PMC pmc = new PMC(cnb, getBlocks(cnb)); + if (pmc.isValid) { + // pmcCache.add(cnb); + if (pmc.isReady()) { + pmc.endorse(); + } + else { + pendingEndorsers.add(pmc); + } + // } + } + } + + while (true) { + while (!readyQueue.isEmpty()) { + + IBlock ready = readyQueue.remove(); + + ready.process(); + + if (solution != null) { + log("solution found"); + Bag bag = currentBag.addNestedBag(solution.vertexSet); + solution.carryOutDecomposition(bag); + return; + } + } + + if (!pendingEndorsers.isEmpty()) { + log("queue empty"); + } + + ArrayList endorsers = pendingEndorsers; + pendingEndorsers = new ArrayList(); + for (PMC endorser : endorsers) { + endorser.process(); + if (solution != null) { + log("solution found"); + Bag bag = currentBag.addNestedBag(solution.vertexSet); + solution.carryOutDecomposition(bag); + return; + } + } + if (readyQueue.isEmpty()) { + break; + } + } + + log("failed"); + + targetWidth++; + } + return; + } + + boolean crossesOrSubsumes(XBitSet separator1, XBitSet endorsed, XBitSet separator2) { + ArrayList components = g.getComponents(separator1); + for (XBitSet compo: components) { + if (endorsed.isSubset(compo)) { + // subsumes + return true; + } + } + // test crossing + XBitSet diff = separator2.subtract(separator1); + for (XBitSet compo: components) { + if (diff.isSubset(compo)) { + return false; + } + } + return true; + } + + Block getBlock(XBitSet component) { + Block block = blockCache.get(component); + if (block == null) { + block = new Block(component); + blockCache.put(component, block); + } + return block; + } + + void makeIBlock(XBitSet component, PMC endorser) { + IBlock iBlock = 
iBlockCache.get(component); + if (iBlock == null) { + Block block = getBlock(component); + iBlock = new IBlock(block, endorser); + blockCache.put(component, block); + } + } + + IBlock getIBlock(XBitSet component) { + return iBlockCache.get(component); + } + + boolean isFullComponent(XBitSet component, XBitSet sep) { + for (int v = sep.nextSetBit(0); v >= 0; v = sep.nextSetBit(v + 1)) { + if (component.isDisjoint(g.neighborSet[v])) { + return false; + } + } + return true; + } + + ArrayList getBlocks(XBitSet separator) { + ArrayList result = new ArrayList(); + XBitSet rest = g.all.subtract(separator); + for (int v = rest.nextSetBit(0); v >= 0; v = rest.nextSetBit(v + 1)) { + XBitSet c = g.neighborSet[v].subtract(separator); + XBitSet toBeScanned = (XBitSet) c.clone(); + c.set(v); + while (!toBeScanned.isEmpty()) { + XBitSet save = (XBitSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; w = toBeScanned + .nextSetBit(w + 1)) { + c.or(g.neighborSet[w]); + } + c.andNot(separator); + toBeScanned = c.subtract(save); + } + + Block block = getBlock(c); + result.add(block); + rest.andNot(c); + } + return result; + } + + class Block implements Comparable { + XBitSet component; + XBitSet separator; + XBitSet outbound; + + Block(XBitSet component) { + this.component = component; + this.separator = g.neighborSet(component); + + XBitSet rest = g.all.subtract(component); + rest.andNot(separator); + + int minCompo = component.nextSetBit(0); + + // the scanning order ensures that the first full component + // encountered is the outbound one + for (int v = rest.nextSetBit(0); v >= 0; v = rest.nextSetBit(v + 1)) { + XBitSet c = (XBitSet) g.neighborSet[v].clone(); + XBitSet toBeScanned = c.subtract(separator); + c.set(v); + while (!toBeScanned.isEmpty()) { + XBitSet save = (XBitSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; + w = toBeScanned.nextSetBit(w + 1)) { + c.or(g.neighborSet[w]); + } + toBeScanned = c.subtract(save).subtract(separator); + } + if 
(separator.isSubset(c)) { + // full block other than "component" found + if (v < minCompo) { + outbound = c.subtract(separator); + } + else { + // v > minCompo + outbound = component; + } + return; + } + rest.andNot(c); + } + } + + boolean isOutbound() { + return outbound == component; + } + + boolean ofMinimalSeparator() { + return outbound != null; + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + if (outbound == component) { + sb.append("o"); + } + else { + if (iBlockCache.get(component) != null) { + sb.append("f"); + } else { + sb.append("i"); + } + } + sb.append(component + "(" + separator + ")"); + return sb.toString(); + } + + @Override + public int compareTo(Block b) { + return component.nextSetBit(0) - b.component.nextSetBit(0); + } + } + + class IBlock { + Block block; + PMC endorser; + + IBlock(Block block, PMC endorser) { + this.block = block; + this.endorser = endorser; + + if (DEBUG) { + System.out.println("IBlock constructor" + this); + } + + } + + void process() { + if (DEBUG) { + System.out.print("processing " + this); + } + + makeSimpleTBlock(); + + ArrayList oBlockSeparators = new ArrayList<>(); + oBlockSieve.collectSuperblocks( + block.component, block.separator, oBlockSeparators); + + for (XBitSet tsep : oBlockSeparators) { + Oblock oBlock = oBlockCache.get(tsep); + oBlock.plugin(this); + } + } + + void makeSimpleTBlock() { + + if (DEBUG) { + System.out.print("makeSimple: " + this); + } + + Oblock oBlock = oBlockCache.get(block.separator); + if (oBlock == null) { + oBlock = new Oblock(block.separator, block.outbound); + oBlockCache.put(block.separator, oBlock); + oBlockSieve.put(block.outbound, block.separator); + oBlock.crown(); + } + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("IBlock:" + block.separator + "\n"); + sb.append(" in :" + block.component + "\n"); + sb.append(" out :" + block.outbound + "\n"); + return sb.toString(); + } + } + + class Oblock { + XBitSet 
separator; + XBitSet openComponent; + + Oblock(XBitSet separator, XBitSet openComponent) { + this.separator = separator; + this.openComponent = openComponent; + } + + void plugin(IBlock iBlock) { + if (DEBUG) { + System.out.println("plugin " + iBlock); + System.out.println(" to " + this); + } + + XBitSet newsep = separator.unionWith(iBlock.block.separator); + + if (newsep.cardinality() > targetWidth + 1) { + return; + } + + ArrayList blockList = getBlocks(newsep); + + Block fullBlock = null; + int nSep = newsep.cardinality(); + + for (Block block : blockList) { + if (block.separator.cardinality() == nSep) { + if (fullBlock != null) { +// minimal separator: treated elsewhere + return; + } + fullBlock = block; + } + } + + if (fullBlock == null) { +// if (!pmcCache.contains(newsep)) { + PMC pmc = new PMC(newsep, blockList); + if (pmc.isValid) { +// pmcCache.add(newsep); + if (pmc.isReady()) { + pmc.endorse(); + } + else { + pendingEndorsers.add(pmc); + } +// } + } + } + + else { + if (newsep.cardinality() > targetWidth) { + return; + } + Oblock oBlock = oBlockCache.get(newsep); + if (oBlock == null) { + oBlock = new Oblock(newsep, fullBlock.component); + oBlockCache.put(newsep, oBlock); + oBlockSieve.put(fullBlock.component, newsep); + oBlock.crown(); + } + } + } + + void crown() { + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + if (DEBUG) { + System.out.println("try crowning by " + v); + } + + XBitSet newsep = separator.unionWith( + g.neighborSet[v].intersectWith(openComponent)); + if (newsep.cardinality() <= targetWidth + 1) { + + if (DEBUG) { + System.out.println("crowning by " + v + ":" + this); + } +// if (!pmcCache.contains(newsep)) { + PMC pmc = new PMC(newsep); + if (pmc.isValid) { +// pmcCache.add(newsep); + if (pmc.isReady()) { + pmc.endorse(); + } + else { + pendingEndorsers.add(pmc); + } +// } + } + } + } + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("TBlock:\n"); + sb.append(" 
sep :" + separator + "\n"); + sb.append(" open:" + openComponent + "\n"); + return sb.toString(); + } + } + + class PMC { + XBitSet vertexSet; + Block inbounds[]; + Block outbound; + boolean isValid; + + PMC(XBitSet vertexSet) { + this(vertexSet, getBlocks(vertexSet)); + } + + PMC(XBitSet vertexSet, ArrayList blockList) { + this.vertexSet = vertexSet; + if (vertexSet.isEmpty()) { + return; + } + for (Block block: blockList) { + if (block.isOutbound() && + (outbound == null || + outbound.separator.isSubset(block.separator))){ + outbound = block; + } + } + if (outbound == null) { + inbounds = blockList.toArray( + new Block[blockList.size()]); + } + else { + inbounds = new Block[blockList.size()]; + int k = 0; + for (Block block: blockList) { + if (!block.separator.isSubset(outbound.separator)) { + inbounds[k++] = block; + } + } + if (k < inbounds.length) { + inbounds = Arrays.copyOf(inbounds, k); + } + } + checkValidity(); + + if (DEBUG +// || +// vertexSet.equals( +// new XBitSet(new int[]{0, 1, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 38, 39, 40, 41, 42, 43, 44, 45, 55, 56, 57, 58, 59, 60, 61, 66, 69})) + ) { + System.out.println("PMC created:"); + System.out.println(this); + } + } + + void checkValidity() { + for (Block b: inbounds) { + if (!b.ofMinimalSeparator()) { + isValid = false; + return; + } + } + + for (int v = vertexSet.nextSetBit(0); v >= 0; + v = vertexSet.nextSetBit(v + 1)) { + XBitSet rest = vertexSet.subtract(g.neighborSet[v]); + rest.clear(v); + if (outbound != null && outbound.separator.get(v)) { + rest.andNot(outbound.separator); + } + for (Block b : inbounds) { + if (b.separator.get(v)) { + rest.andNot(b.separator); + } + } + if (!rest.isEmpty()) { + isValid = false; + return; + } + } + isValid = true; + } + + boolean isReady() { + for (int i = 0; i < inbounds.length; i++) { + if (iBlockCache.get(inbounds[i].component) == null) { + return false; + } + } + return true; + } + + XBitSet getTarget() { + if (outbound == null) { 
+ return null; + } + XBitSet combined = vertexSet.subtract(outbound.separator); + for (Block b: inbounds) { + combined.or(b.component); + } + return combined; + } + + + void process() { + if (DEBUG) { + System.out.print("processing " + this); + } + if (isReady()) { + if (DEBUG) { + System.out.print("endorsing " + this); + } + endorse(); + } + else { + pendingEndorsers.add(this); + } + } + + void endorse() { + + if (DEBUG) { + System.out.print("endorsing " + this); + } + + if (DEBUG) { + System.out.println("outbound = " + outbound); + } + + if (outbound == null) { + if (DEBUG) { + System.out.println("solution found in endorse()"); + } + solution = this; + return; + } + else { + endorse(getTarget()); + } + } + + void endorse(XBitSet target) { + if (DEBUG) { + System.out.println("endorsed = " + target); + } + + // if (separator.equals(bs1)) { + // System.err.println("endorsed = " + endorsed + + // ", " + endorserMap.get(endorsed)); + // } + // + + if (iBlockCache.get(target) == null) { + Block block = getBlock(target); + IBlock iBlock = new IBlock(block, this); + iBlockCache.put(target, iBlock); + + if (DEBUG) { + System.out.println("adding to ready queue " + iBlock); + } + readyQueue.add(iBlock); + } + } + + void carryOutDecomposition(Bag bag) { + if (DEBUG) { + System.out.print("carryOutDecomposition:" + this); + } + + for (Block inbound: inbounds) { + if (DEBUG) { + System.out.println("inbound = " + inbound); + } + IBlock iBlock = iBlockCache.get(inbound.component); + if (iBlock == null) { + System.out.println("inbound iBlock is null, block = " + inbound); + continue; + } + + Bag subBag = currentBag.addNestedBag( + iBlock.endorser.vertexSet); + Separator separator = + currentBag.addSeparator(inbound.separator); + + separator.incidentBags.add(bag); + separator.incidentBags.add(subBag); + + bag.incidentSeparators.add(separator); + subBag.incidentSeparators.add(separator); + iBlock.endorser.carryOutDecomposition(subBag); + } + } + + private XBitSet inletsInduced() { + 
XBitSet result = new XBitSet(g.n); + for (Block b : inbounds) { + result.or(b.separator); + } + return result; + } + + public String toString() { + + StringBuilder sb = new StringBuilder(); + sb.append("PMC"); + if (isValid) { + sb.append("(valid):\n"); + } + else { + sb.append("(invalid):\n"); + } + sb.append(" sep : " + vertexSet + "\n"); + sb.append(" outbound: " + outbound + "\n"); + + for (Block b : inbounds) { + sb.append(" inbound : " + b + "\n"); + } + return sb.toString(); + } + } + + int numberOfEnabledBlocks() { + return iBlockCache.size(); + } + + void dumpPendings() { + System.out.println("pending endorsers\n"); + for (PMC endorser : pendingEndorsers) { + System.out.print(endorser); + } + } + + void log(String logHeader) { + if (VERBOSE) { + + int sizes[] = oBlockSieve.getSizes(); + + System.out.println(logHeader); + System.out.print("n = " + g.n + " width = " + targetWidth + ", oBlocks = " + + oBlockCache.size() + Arrays.toString(sizes)); + System.out.print(", endorsed = " + iBlockCache.size()); + System.out.print(", pendings = " + pendingEndorsers.size()); + System.out.println(", blocks = " + blockCache.size()); + } + } +} diff --git a/solvers/TCS-Meiji/tw/exact/LayeredSieve.java b/solvers/TCS-Meiji/tw/exact/LayeredSieve.java new file mode 100644 index 0000000..a95f979 --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/LayeredSieve.java @@ -0,0 +1,53 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.util.ArrayList; + +public class LayeredSieve { + int n; + int targetWidth; + BlockSieve sieves[]; + + public LayeredSieve(int n, int targetWidth) { + this.n = n; + this.targetWidth = targetWidth; + + int k = 33 - Integer.numberOfLeadingZeros(targetWidth); + sieves = new BlockSieve[k]; + for (int i = 0; i < k; i++) { + int margin = (1 << i) - 1; + sieves[i] = new BlockSieve(n, targetWidth, margin); + } + } + + public void put(XBitSet vertices, XBitSet neighbors) { + int ns = neighbors.cardinality(); + int margin = targetWidth 
+ 1 - ns; + int i = 32 - Integer.numberOfLeadingZeros(margin); + sieves[i].put(vertices, neighbors); + } + + public void put(XBitSet vertices, int neighborSize, XBitSet value) { + int margin = targetWidth + 1 - neighborSize; + int i = 32 - Integer.numberOfLeadingZeros(margin); + sieves[i].put(vertices, value); + } + + public void collectSuperblocks(XBitSet component, XBitSet neighbors, + ArrayList<XBitSet> list) { + for (BlockSieve sieve: sieves) { + sieve.collectSuperblocks(component, neighbors, list); + } + } + + public int[] getSizes() { + int sizes[] = new int[sieves.length]; + for (int i = 0; i < sieves.length; i++) { + sizes[i] = sieves[i].size(); + } + return sizes; + } +} diff --git a/solvers/TCS-Meiji/tw/exact/MainDecomposer.java b/solvers/TCS-Meiji/tw/exact/MainDecomposer.java new file mode 100644 index 0000000..35859d4 --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/MainDecomposer.java @@ -0,0 +1,170 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.io.File; + + + +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; +import java.lang.management.GarbageCollectorMXBean; +import java.lang.management.ManagementFactory; +import java.lang.management.ThreadMXBean; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Random; + +public class MainDecomposer { + private static boolean VERBOSE = false; +// private static boolean VERBOSE = true; + private static boolean DEBUG = false; +//private static boolean debug = true; + + private static long time0; + + public static TreeDecomposition decompose(Graph g) { + log("decompose n = " + g.n); + if (g.n == 0) { + TreeDecomposition td = new TreeDecomposition(0, -1, g); + return td; + } + + ArrayList<XBitSet> components = g.getComponents(new XBitSet()); + + int nc = components.size(); + if (nc == 1) { + return decomposeConnected(g); + } + + int invs[][] = new int[nc][]; + Graph graphs[] = new Graph[nc]; + 
for (int i = 0; i < nc; i++) { + XBitSet compo = components.get(i); + int nv = compo.cardinality(); + graphs[i] = new Graph(nv); + invs[i] = new int[nv]; + int conv[] = new int[g.n]; + int k = 0; + for (int v = 0; v < g.n; v++) { + if (compo.get(v)) { + conv[v] = k; + invs[i][k] = v; + k++; + } + else { + conv[v] = -1; + } + } + graphs[i].inheritEdges(g, conv, invs[i]); + } + + TreeDecomposition td = new TreeDecomposition(0, 0, g); + + for (int i = 0; i < nc; i++) { + TreeDecomposition td1 = decomposeConnected(graphs[i]); + if (td1 == null) { + return null; + } + td.combineWith(td1, invs[i], null); + } + return td; + } + + public static TreeDecomposition decomposeConnected(Graph g) { + log("decomposeConnected: n = " + g.n); + + if (g.n <= 2) { + TreeDecomposition td = new TreeDecomposition(0, g.n - 1, g); + td.addBag(g.all.toArray()); + return td; + } + + Bag best = null; + + GreedyDecomposer.Mode[] modes = + new GreedyDecomposer.Mode[]{ + GreedyDecomposer.Mode.fill, + GreedyDecomposer.Mode.defect, + GreedyDecomposer.Mode.degree + }; + + for (GreedyDecomposer.Mode mode: modes) { + Bag whole = new Bag(g); + + GreedyDecomposer mfd = new GreedyDecomposer(whole, mode); +// GreedyDecomposer mfd = new GreedyDecomposer(whole); + + mfd.decompose(); + + log("greedy decomposition (" + mode + ") obtained with " + + whole.nestedBags.size() + " bags and width " + + whole.width); + + whole.detectSafeSeparators(); + + log(whole.countSafeSeparators() + " safe separators found "); + + whole.validate(); + + whole.pack(); + + whole.validate(); + + log("the decomposition packed into " + + whole.nestedBags.size() + " bags, separatorWidth = " + + whole.separatorWidth + ", max bag size = " + + whole.maxNestedBagSize()); + + if (best == null || + whole.maxNestedBagSize() < best.maxNestedBagSize()) { + best = whole; + } + } +// best = whole; + + // whole.dump(); + + int lowestPossible = best.separatorWidth; + + for (Bag bag: best.nestedBags) { + if (bag.getWidth() > lowestPossible) { + 
bag.makeRefinable(); + IODecomposer mtd = new IODecomposer(bag, g.minDegree(), g.n - 1); + mtd.decompose(); + int w = bag.getWidth(); + if (w > lowestPossible) { + lowestPossible = w; + } + } + } + + log("flattening"); + + best.flatten(); + + log("the decomposition flattened into " + + best.nestedBags.size() + " bags"); + +// whole.dump(); + + return best.toTreeDecomposition(); + } + + static void log(String message) { + if (VERBOSE) { + System.out.println(message); + } + } + + public static void main(String[] args) { + Graph g = Graph.readGraph(System.in); + TreeDecomposition td = decompose(g); + td.writeTo(System.out); + } +} diff --git a/solvers/TCS-Meiji/tw/exact/SafeSeparator.java b/solvers/TCS-Meiji/tw/exact/SafeSeparator.java new file mode 100644 index 0000000..4ae9e0b --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/SafeSeparator.java @@ -0,0 +1,570 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.io.File; + +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashSet; + +public class SafeSeparator { + private static int MAX_MISSINGS = 100; + private static int DEFAULT_MAX_STEPS = 1000000; + private static final boolean CONFIRM_MINOR = true; +// private static final boolean CONFIRM_MINOR = false; +// private static final boolean DEBUG = true; + private static final boolean DEBUG = false; + + Graph g; + + int maxSteps; + int steps; + LeftNode[] leftNodes; + ArrayList<RightNode> rightNodeList; + ArrayList<MissingEdge> missingEdgeList; + XBitSet available; + + public SafeSeparator (Graph g) { + this.g = g; + } + + public boolean isSafeSeparator(XBitSet separator) { + return isSafeSeparator(separator, DEFAULT_MAX_STEPS); + } + + public boolean isSafeSeparator(XBitSet separator, int maxSteps) { + // System.out.println("isSafeSeparator " + separator); + this.maxSteps = maxSteps; + steps = 0; + ArrayList<XBitSet> components = 
g.getComponents(separator); + if (components.size() == 1) { +// System.err.println("non separator for safety testing:" + separator); +// throw new RuntimeException("non separator for safety testing:" + separator); + return false; + } + if (countMissings(separator) > MAX_MISSINGS) { + return false; + } + for (XBitSet compo: components) { + XBitSet sep = g.neighborSet(compo); + XBitSet rest = g.all.subtract(sep).subtract(compo); + XBitSet[] contracts = findCliqueMinor(sep, rest); + if (contracts == null) { + return false; + } + if (CONFIRM_MINOR) { + confirmCliqueMinor(sep, rest, contracts); + } + } + return true; + } + + private class LeftNode { + int index; + int vertex; + // ArrayList rightNeighborList; + // XBitSet rightNeighborSet; + + LeftNode(int index, int vertex) { + this.index = index; + this.vertex = vertex; + // rightNeighborList = new ArrayList<>(); + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("left" + index + "(" + vertex + "):"); + sb.append(", " + g.neighborSet[vertex]); + return sb.toString(); + } + } + + private class RightNode { + int index; + XBitSet vertexSet; + XBitSet neighborSet; + LeftNode assignedTo; + boolean printed; + + RightNode(int vertex) { + vertexSet = new XBitSet(g.n); + vertexSet.set(vertex); + neighborSet = g.neighborSet(vertexSet); + } + + RightNode(XBitSet vertexSet) { + this.vertexSet = vertexSet; + neighborSet = g.neighborSet(vertexSet); + } + + boolean potentiallyCovers(MissingEdge me) { + return + assignedTo == null && + neighborSet.get(me.left1.vertex) && + neighborSet.get(me.left2.vertex); + } + + boolean finallyCovers(MissingEdge me) { + return + assignedTo == me.left1 && + neighborSet.get(me.left2.vertex) || + assignedTo == me.left2 && + neighborSet.get(me.left1.vertex); + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("right" + index + ":" + vertexSet); + if (!printed) { + sb.append(", " + neighborSet); + } + if (assignedTo != null) { + 
sb.append("-> l" + assignedTo.index); + } + sb.append(", covers {"); + for (MissingEdge me: missingEdgeList) { + if (this.potentiallyCovers(me)) { + sb.append("me" + me.index + " "); + } + } + printed = true; + sb.append("}"); + + return sb.toString(); + } + + } + + private class MissingEdge { + int index; + LeftNode left1; + LeftNode left2; + boolean unAugmentable; + + MissingEdge(LeftNode left1, LeftNode left2) { + this.left1 = left1; + this.left2 = left2; + } + + RightNode[] findCoveringPair() { + for (RightNode rn1: rightNodeList) { + if (rn1.neighborSet.get(left1.vertex) && + !rn1.neighborSet.get(left2.vertex)) { + for (RightNode rn2: rightNodeList) { + if (!rn2.neighborSet.get(left1.vertex) && + rn2.neighborSet.get(left2.vertex) && + connectable(rn1.vertexSet, rn2.vertexSet)) { + return new RightNode[]{rn1, rn2}; + } + } + } + } + return null; + } + + boolean isFinallyCovered() { + for (RightNode rn: rightNodeList) { + if (rn.finallyCovers(this)) { + return true; + } + } + return false; + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("missing(" + left1.index + "," + + left2.index + "), covered by {"); + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(this)) { + sb.append("r" + rn.index + " "); + } + } + sb.append("}"); + return sb.toString(); + } + } + private XBitSet[] findCliqueMinor(XBitSet separator, XBitSet rest) { + int k = separator.cardinality(); + available = (XBitSet) rest.clone(); + leftNodes = new LeftNode[k]; + { + int i = 0; + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + leftNodes[i] = new LeftNode(i, v); + i++; + } + } + + missingEdgeList = new ArrayList<>(); + { + int i = 0; + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + int j = i + 1; + for (int w = separator.nextSetBit(v + 1); w >= 0; + w = separator.nextSetBit(w + 1)) { + if (!g.neighborSet[v].get(w)) { + missingEdgeList.add(new MissingEdge(leftNodes[i], 
leftNodes[j])); + } + j++; + } + i++; + } + } + + int m = missingEdgeList.size(); + + XBitSet[] result = new XBitSet[k]; + for (int i = 0; i < k; i++) { + result[i] = new XBitSet(g.n); + result[i].set(leftNodes[i].vertex); + } + + if (m == 0) { + return result; + } + +// System.out.println(m + " missings for separator size " + k + +// " and total components size " + rest.cardinality()); + for (int i = 0; i < m; i++) { + missingEdgeList.get(i).index = i; + } + + rightNodeList = new ArrayList<>(); + XBitSet ns = g.neighborSet(separator); + ns.and(rest); + + for (int v = ns.nextSetBit(0); v >= 0; + v = ns.nextSetBit(v + 1)) { + if (g.neighborSet[v].cardinality() == 1) { + continue; + } + boolean useless = true; + for (MissingEdge me: missingEdgeList) { + if (g.neighborSet[v].get(me.left1.vertex) || + g.neighborSet[v].get(me.left2.vertex)) { + useless = false; + } + } + if (useless) { + continue; + } + RightNode rn = new RightNode(v); + rightNodeList.add(rn); + available.clear(v); + } + + while (true) { + steps++; + if (steps > maxSteps) { + return null; + } + MissingEdge zc = zeroCovered(); + if (zc == null) { + break; + } + RightNode[] coveringPair = zc.findCoveringPair(); + if (coveringPair != null) { + mergeRightNodes(coveringPair); + } + else { + return null; + } + } + + boolean moving = true; + while (rightNodeList.size() > k/2 && moving) { + steps++; + if (steps > maxSteps) { + return null; + } + moving = false; + MissingEdge lc = leastCovered(); + if (lc == null) { + break; + } + RightNode[] coveringPair = lc.findCoveringPair(); + if (coveringPair != null) { + mergeRightNodes(coveringPair); + moving = true; + } + else { + lc.unAugmentable = true; + } + } + + ArrayList<RightNode> temp = rightNodeList; + rightNodeList = new ArrayList<>(); + + for (RightNode rn: temp) { + boolean covers = false; + for (MissingEdge me: missingEdgeList) { + if (rn.potentiallyCovers(me)) { + covers = true; + break; + } + } + if (covers) { + rightNodeList.add(rn); + } + } + + int nRight = 
rightNodeList.size(); + for (int i = 0; i < nRight; i++) { + rightNodeList.get(i).index = i; + } + + if (DEBUG) { + System.out.println(k + " lefts"); + for (LeftNode ln: leftNodes) { + System.out.println(ln); + } + System.out.println(nRight + " rights"); + for (RightNode rn: rightNodeList) { + System.out.println(rn); + } + System.out.println(m + " missings"); + for (MissingEdge me: missingEdgeList) { + System.out.println(me); + } + } + + while (!missingEdgeList.isEmpty()) { + if (DEBUG) { + System.out.println(missingEdgeList.size() + " missings"); + for (RightNode rn: rightNodeList) { + System.out.println(rn); + } + } + int[] bestPair = null; + int maxMinCover = 0; + int maxFc = 0; + + for (LeftNode ln: leftNodes) { + for (RightNode rn: rightNodeList) { + if (rn.assignedTo != null || + !rn.neighborSet.get(ln.vertex)) { + continue; + } + steps++; + if (steps > maxSteps) { + return null; + } + rn.assignedTo = ln; + int minCover = minCover(); + int fc = 0; + for (MissingEdge me: missingEdgeList) { + if (me.isFinallyCovered()) { + fc++; + } + } + rn.assignedTo = null; + if (bestPair == null || minCover > maxMinCover) { + maxMinCover = minCover; + bestPair = new int[] {ln.index, rn.index}; + maxFc = fc; + } + else if (minCover == maxMinCover && fc > maxFc) { + bestPair = new int[] {ln.index, rn.index}; + maxFc = fc; + } + } + } + if (maxMinCover == 0) { + return null; + } + + if (DEBUG) { + System.out.println("maxMinCover = " + maxMinCover + + ", maxFC = " + maxFc + + ", bestPair = " + Arrays.toString(bestPair)); + + } + rightNodeList.get(bestPair[1]).assignedTo = + leftNodes[bestPair[0]]; + + ArrayList<MissingEdge> temp1 = missingEdgeList; + missingEdgeList = new ArrayList<>(); + for (MissingEdge me: temp1) { + if (!me.isFinallyCovered()) { + missingEdgeList.add(me); + } + } + } + + if (DEBUG) { + System.out.println("assignment success"); + for (RightNode rn: rightNodeList) { + System.out.println(rn); + } + } + + for (RightNode rn: rightNodeList) { + if (rn.assignedTo != null) { + 
int i = rn.assignedTo.index; + result[i].or(rn.vertexSet); + } + } + return result; + } + + void confirmCliqueMinor(XBitSet separator, XBitSet rest, XBitSet[] contracts) { + { + int i = 0; + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + if (!contracts[i].get(v)) { + throw new RuntimeException("Not a clique minor: vertex " + v + + " is not contained in the contracted " + contracts[i]); + } + i++; + } + } + for (int i = 0; i < contracts.length; i++) { + for (int j = i + 1; j < contracts.length; j++) { + if (contracts[i].intersects(contracts[j])) { + throw new RuntimeException("Not a clique minor: contracts " + + contracts[i] + " and " + contracts[j] + " intersect with each other"); + } + if (!g.neighborSet(contracts[i]).intersects(contracts[j])) { + throw new RuntimeException("Not a clique minor: contracts " + + contracts[i] + " and " + contracts[j] + " are not adjacent to each other"); + } + } + } + + for (int i = 0; i < contracts.length; i++) { + if (!g.isConnected(contracts[i])) { + throw new RuntimeException("Not a clique minor: contracted " + + contracts[i] + " is not connected"); + } + } + } + + int minCover() { + int minCover = g.n; + for (MissingEdge me: missingEdgeList) { + if (me.isFinallyCovered()) { + continue; + } + int nCover = 0; + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(me)) { + nCover++; + } + } + if (nCover < minCover) { + minCover = nCover; + } + } + return minCover; + } + + MissingEdge leastCovered() { + int minCover = 0; + MissingEdge result = null; + for (MissingEdge me: missingEdgeList) { + if (me.unAugmentable) { + continue; + } + int nCover = 0; + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(me)) { + nCover++; + } + } + if (result == null || nCover < minCover) { + minCover = nCover; + result = me; + } + } + return result; + } + + MissingEdge zeroCovered() { + for (MissingEdge me: missingEdgeList) { + int nCover = 0; + for (RightNode rn: rightNodeList) { + if 
(rn.potentiallyCovers(me)) { + nCover++; + } + } + if (nCover == 0) { + return me; + } + } + return null; + } + + boolean connectable(XBitSet vs1, XBitSet vs2) { + XBitSet vs = (XBitSet) vs1.clone(); + while (true) { + XBitSet ns = g.neighborSet(vs); + if (ns.intersects(vs2)) { + return true; + } + ns.and(available); + if (ns.isEmpty()) { + return false; + } + vs.or(ns); + } + } + + void mergeRightNodes(RightNode[] coveringPair) { + RightNode rn1 = coveringPair[0]; + RightNode rn2 = coveringPair[1]; + + XBitSet connected = connect(rn1.vertexSet, rn2.vertexSet); + RightNode rn = new RightNode(connected); + rightNodeList.remove(rn1); + rightNodeList.remove(rn2); + rightNodeList.add(rn); + } + + XBitSet connect(XBitSet vs1, XBitSet vs2) { + ArrayList<XBitSet> layerList = new ArrayList<>(); + + XBitSet vs = (XBitSet) vs1.clone(); + while (true) { + XBitSet ns = g.neighborSet(vs); + if (ns.intersects(vs2)) { + break; + } + ns.and(available); + layerList.add(ns); + vs.or(ns); + } + + XBitSet result = vs1.unionWith(vs2); + + XBitSet back = g.neighborSet(vs2); + for (int i = layerList.size() - 1; i >= 0; i--) { + XBitSet ns = layerList.get(i); + ns.and(back); + int v = ns.nextSetBit(0); + result.set(v); + available.clear(v); + back = g.neighborSet[v]; + } + return result; + } + + int countMissings(XBitSet s) { + int count = 0; + for (int v = s.nextSetBit(0); v >= 0; + v = s.nextSetBit(v + 1)) { + count += s.subtract(g.neighborSet[v]).cardinality() - 1; + } + return count / 2; + } + +} diff --git a/solvers/TCS-Meiji/tw/exact/Separator.java b/solvers/TCS-Meiji/tw/exact/Separator.java new file mode 100644 index 0000000..b99025a --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/Separator.java @@ -0,0 +1,188 @@ +/* + * Copyright (c) 2017, Hisao Tamaki + */ + +package tw.exact; + +import java.util.ArrayList; + +public class Separator { + Bag parent; + Graph graph; + XBitSet vertexSet; + int size; + ArrayList<Bag> incidentBags; + boolean safe; + boolean unsafe; + boolean wall; + + int[] 
parentVertex; + + public Separator(Bag parent) { + this.parent = parent; + graph = parent.graph; + incidentBags = new ArrayList<>(); + } + + public Separator(Bag parent, XBitSet vertexSet) { + this(parent); + this.vertexSet = vertexSet; + size = vertexSet.cardinality(); + } + + public void addIncidentBag(Bag bag) { + incidentBags.add(bag); + } + + public void removeVertex(int v) { + if (vertexSet.get(v)) { + size--; + } + vertexSet.clear(v); + } + + public void invert() { + vertexSet = convert(vertexSet, parent.inv); + parent = parent.parent; + } + + public void convert() { + vertexSet = convert(vertexSet, parent.conv); + } + + private XBitSet convert(XBitSet s, int[] conv) { + XBitSet result = new XBitSet(); + for (int v = s.nextSetBit(0); v >= 0; + v = s.nextSetBit(v + 1)) { + result.set(conv[v]); + } + return result; + } + + public void collectBagsToPack(ArrayList<Bag> list, Bag from) { + for (Bag bag: incidentBags) { + if (bag != from) { + bag.collectBagsToPack(list, this); + } + } + } + + public void figureOutSafety(SafeSeparator ss) { + if (!safe && !unsafe) { + safe = ss.isSafeSeparator(vertexSet); + unsafe = !safe; + } + } + + public void figureOutSafetyBySPT() { + if (!safe && !unsafe) { + safe = isSafe(); + unsafe = !safe; + } + } + + public boolean isSafe() { + return isSafeBySPT(); + } + + public boolean isSafeBySPT() { + parentVertex = new int[graph.n]; + ArrayList<XBitSet> components = + graph.getComponents(vertexSet); + for (XBitSet compo: components) { + if (!isSafeComponentBySPT(compo)) { + return false; + } + } + return true; + } + + private boolean isSafeComponentBySPT(XBitSet component) { + XBitSet neighborSet = graph.neighborSet(component); + XBitSet rest = graph.all.subtract(neighborSet).subtract(component); + + for (int v = neighborSet.nextSetBit(0); v >= 0; + v = neighborSet.nextSetBit(v + 1)) { + XBitSet missing = neighborSet.subtract(graph.neighborSet[v]); + + for (int w = missing.nextSetBit(0); w >= 0 && w <= v; + w = missing.nextSetBit(w + 1)) { + 
missing.clear(w); + } + + if (!missing.isEmpty()) { + XBitSet spt = shortestPathTree(v, missing, rest); + if (spt == null) { + return false; + } + rest.andNot(spt); + } + } + return true; + } + + private XBitSet shortestPathTree(int v, XBitSet targets, + XBitSet available) { + XBitSet union = available.unionWith(targets); + + XBitSet reached = new XBitSet(graph.n); + reached.set(v); + XBitSet leaves = (XBitSet) reached.clone(); + while (!targets.isSubset(reached) && !leaves.isEmpty()) { + XBitSet newLeaves = new XBitSet(graph.n); + for (int u = leaves.nextSetBit(0); u >= 0; + u = leaves.nextSetBit(u + 1)) { + XBitSet children = + graph.neighborSet[u].intersectWith(union).subtract(reached); + for (int w = children.nextSetBit(0); w >= 0; + w = children.nextSetBit(w + 1)) { + reached.set(w); + parentVertex[w] = u; + if (available.get(w)) { + newLeaves.set(w); + } + } + } + leaves = newLeaves; + } + + if (!targets.isSubset(reached)) { + return null; + } + + XBitSet spt = new XBitSet(graph.n); + for (int u = targets.nextSetBit(0); u >= 0; + u = targets.nextSetBit(u + 1)) { + int w = parentVertex[u]; + while (w != v) { + spt.set(w); + w = parentVertex[w]; + } + } + return spt; + } + + + public void dump(String indent) { + System.out.println(indent + "sep:" + toString()); + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(vertexSet); + sb.append("("); + for (Bag bag: incidentBags){ + if (bag == null) { + sb.append("null bag "); + } + else { + sb.append(parent.nestedBags.indexOf(bag) + ":" + bag.vertexSet); + sb.append(" "); + } + } + sb.append(")"); + + return sb.toString(); + } + +} diff --git a/solvers/TCS-Meiji/tw/exact/TreeDecomposition.java b/solvers/TCS-Meiji/tw/exact/TreeDecomposition.java new file mode 100644 index 0000000..6205a16 --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/TreeDecomposition.java @@ -0,0 +1,912 @@ +/* + * Copyright (c) 2016, Hisao Tamaki + */ +package tw.exact; + +import java.io.BufferedReader; + + +import 
java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.FileReader; +import java.io.IOException; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; + +/** + * This class provides a representation of tree-decompositions of graphs. + * It is based on the {@code Graph} class for the representation of graphs. + * Members representing the bags and tree edges are all public. + * Reading from and writing to files, in the .td format of the PACE challenge, + * are provided. + * + * @author Hisao Tamaki + */ + +public class TreeDecomposition { + /** + * number of bags + */ + public int nb; + + /** + * intended width of this decomposition + */ + public int width; + + /** + * the graph decomposed + */ + public Graph g; + + /** + * array of bags, each of which is an int array listing vertices. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public int[][] bags; + + /** + * array of bags, each of which is an {@code XBitSet} representing + * the set of vertices in the bag. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public XBitSet[] bagSets; + + /** + * array of node degrees. {@code degree[i]} is the number of bags adjacent + * to the ith bag. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public int degree[]; + + /** + * array of int arrays representing neighbor lists. + * {@code neighbor[i][j]} is the bag index (in the {@code bags} array) of + * the jth bag adjacent to the ith bag. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public int neighbor[][]; + + private static boolean debug = false; +// private static boolean debug = true; + + /** + * Construct a tree decomposition with the specified number of bags, + * intended width, and the graph decomposed. 
+ * @param nb the number of bags + * @param width the intended width + * @param g the graph decomposed + */ + public TreeDecomposition(int nb, int width, Graph g) { + this.nb = nb; + this.width = width; + this.g = g; + bags = new int[nb + 1][]; + degree = new int[nb + 1]; + neighbor = new int[nb + 1][]; + } + + /** + * Sets the ith bag to the given bag. + * @param i the index of the bag. 1 <= i <= nb must hold + * @param bag int array representing the bag + */ + public void setBag(int i, int[] bag) { + bags[i] = bag; + } + + /** + * Adds the given bag. The number of bags {@code nb} is incremented. + * @param bag int array representing the bag to be added + * @return the index of the added bag + */ + public int addBag(int[] bag) { + nb++; + if (debug) { + System.out.print(nb + "th bag:"); + } + for (int i = 0; i < bag.length; i++) { + if (debug) { + System.out.print(" " + bag[i]); + } + } + if (debug) { + System.out.println(); + } + bags = Arrays.copyOf(bags, nb + 1); + bags[nb] = bag; + degree = Arrays.copyOf(degree, nb + 1); + neighbor = Arrays.copyOf(neighbor, nb + 1); + if (bagSets != null) { + bagSets = Arrays.copyOf(bagSets, nb + 1); + bagSets[nb] = new XBitSet(bag); + } + return nb; + } + + /** + * Adds an edge. + * The neighbor lists of both bags, as well as the degrees, + * are updated. + * @param i index of one bag of the edge + * @param j index of the other bag of the edge + */ + public void addEdge(int i, int j) { + if (debug) { + System.out.println("add decomposition edge (" + i + "," + j + ")"); + } + addHalfEdge(i, j); + addHalfEdge(j, i); + } + + /** + * Adds a bag to the neighbor list of another bag + * @param i index of the bag of which the neighbor list is updated + * @param j index of the bag to be added to {@code neighbor[i]} + */ + private void addHalfEdge(int i, int j) { + if (neighbor[i] == null) { + degree[i] = 1; + neighbor[i] = new int[]{j}; + } + else if (indexOf(j, neighbor[i]) < 0){ + degree[i]++; + neighbor[i] = Arrays.copyOf(neighbor[i], degree[i]); + neighbor[i][degree[i] - 
1] = j; + } + } + + /** + * Combine the given tree-decomposition into this target tree-decomposition. + * The following situation is assumed. Let G be the graph for which this + * target tree-decomposition is being constructed. Currently, + * this tree-decomposition contains bags for some subgraph of G. + * The tree-decomposition of some other part of G is given by the argument. + * The numbering of the vertices in the argument tree-decomposition differs + * from that in G and the conversion map is provided by another argument. + * @param td tree-decomposition to be combined + * @param conv the conversion map, that maps the vertex number in the graph of + * tree-decomposition {@code td} into the vertex number of the graph of this + * target tree-decomposition. + */ + public void combineWith(TreeDecomposition td, int conv[]) { + this.width = Math.max(this.width, td.width); + int nb0 = nb; + for (int i = 1; i <= td.nb; i++) { + addBag(convertBag(td.bags[i], conv)); + } + for (int i = 1; i <= td.nb; i++) { + for (int j = 0; j < td.degree[i]; j++) { + int h = td.neighbor[i][j]; + addHalfEdge(nb0 + i, nb0 + h); + } + } + } + /** + * Combine the given tree-decomposition into this target tree-decomposition. + * The assumptions are the same as in the method with two parameters. + * The third parameter specifies the way in which the two parts + * of the decompositions are connected by a tree edge of the decomposition. + * + * @param td tree-decomposition to be combined + * @param conv the conversion map, that maps the vertex number in the graph of + * tree-decomposition {@code td} into the vertex number of the graph of this + * target tree-decomposition. 
+ * @param v int array listing vertices: an existing bag containing all of + * these vertices and a bag in the combined part containing all of + * these vertices are connected by a tree edge; if {@code v} is null + * then the first bags of the two parts are connected + */ + public void combineWith(TreeDecomposition td, int conv[], int v[]) { + this.width = Math.max(this.width, td.width); + int nb0 = nb; + for (int i = 1; i <= td.nb; i++) { + addBag(convertBag(td.bags[i], conv)); + } + for (int i = 1; i <= td.nb; i++) { + for (int j = 0; j < td.degree[i]; j++) { + int h = td.neighbor[i][j]; + addEdge(nb0 + i, nb0 + h); + } + } + if (nb0 == 0) { + return; + } + if (v == null) { + addEdge(1, nb0 + 1); + } + else { + int k = findBagWith(v, 1, nb0); + int h = findBagWith(v, nb0 + 1, nb); + if (k < 0) { + System.out.println(Arrays.toString(v) + " not found in the first " + nb0 + " bags"); + } + if (h < 0) { + System.out.println(Arrays.toString(v) + " not found in the last " + td.nb + " bags"); + } + addEdge(k, h); + } + } + + /** + * Converts the vertex numbers in the bag + * @param bag input bag + * @param conv conversion map of the vertices + * @return the bag resulting from the conversion, + * containing {@code conv[v]} for each v in the original bag + */ + + private int[] convertBag(int bag[], int conv[]) { + int[] result = new int[bag.length]; + for (int i = 0; i < bag.length; i++) { + result[i] = conv[bag[i]]; + } + return result; + } + + /** + * Finds a bag containing all the listed vertices, + * with bag index in the specified range + * @param v int array listing vertices + * @param s the starting bag index + * @param t the ending bag index + * @return index of the bag containing all the + * vertices listed in {@code v}; -1 if none of the + * bags {@code bags[i]}, s <= i <= t, satisfies this + * condition. 
+ */ + private int findBagWith(int v[], int s, int t) { + for (int i = s; i <= t; i++) { + boolean all = true; + for (int j = 0; j < v.length; j++) { + if (indexOf(v[j], bags[i]) < 0) { + all = false; + } + } + if (all) return i; + } + return -1; + } + + /** + * Writes this tree-decomposition to the given print stream + * in the PACE .td format + * @param ps print stream + */ + public void writeTo(PrintStream ps) { + ps.println("s td " + nb + " " + (width + 1) + " " + g.n); + for (int i = 1; i <= nb; i++) { + ps.print("b " + i); + for (int j = 0; j < bags[i].length; j++) { + ps.print(" " + (bags[i][j] + 1)); + } + ps.println(); + } + for (int i = 1; i <= nb; i++) { + for (int j = 0; j < degree[i]; j++) { + int h = neighbor[i][j]; + if (i < h) { + ps.println(i + " " + h); + } + } + } + } + + /** + * Validates this target tree-decomposition, + * checking the required conditions. + * The validation result is printed to the + * standard output + */ + public void validate() { + System.out.println("validating nb = " + nb + ", ne = " + numberOfEdges()); + boolean error = false; + if (!isConnected()) { + System.out.println("is not connected "); + error = true; + } + if (isCyclic()) { + System.out.println("has a cycle "); + error = true; + } + if (tooLargeBag()) { + System.out.println("too large bag "); + error = true; + } + int v = missinVertex(); + if (v >= 0) { + System.out.println("a vertex " + v + " missing "); + error = true; + } + int edge[] = missingEdge(); + if (edge != null) { + System.out.println("an edge " + Arrays.toString(edge) + " is missing "); + error = true; + } + if (violatesConnectivity()) { + System.out.println("connectivity property is violated "); + error = true; + } + if (!error) { + System.out.println("validation ok"); + } + } + + /** + * Checks if this tree-decomposition is connected as + * a graph of bags + * @return {@code true} if this tree-decomposition is connected; + * {@code false} otherwise + */ + + private boolean isConnected() { + 
boolean mark[] = new boolean [nb + 1]; + depthFirst(1, mark); + for (int i = 1; i <= nb; i++) { + if (!mark[i]) { + return false; + } + } + return true; + } + + private void depthFirst(int i, boolean mark[]) { + mark[i] = true; + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (!mark[j]) { + depthFirst(j, mark); + } + } + } + + /** + * Checks if this tree-decomposition is cyclic as + * a graph of bags + * @return {@code true} if this tree-decomposition contains a cycle; + * {@code false} otherwise + */ + + private boolean isCyclic() { + boolean mark[] = new boolean [nb + 1]; + return isCyclic(1, mark, 0); + } + + private boolean isCyclic(int i, boolean mark[], + int parent) { + mark[i] = true; + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j == parent) { + continue; + } + if (mark[j]) { + return true; + } + else { + boolean b = isCyclic(j, mark, i); + if (b) return true; + } + } + return false; + } + + /** + * Checks if some bag size exceeds the declared + * tree-width plus one + * @return {@code true} if there is some violating bag, + * {@code false} otherwise + */ + private boolean tooLargeBag() { + for (int i = 1; i <= nb; i++) { + if (bags[i].length > width + 1) { + return true; + } + } + return false; + } + + /** + * Finds a vertex of the graph that does not appear + * in any of the bags + * @return the missing vertex number; -1 if there is no + * missing vertex + */ + private int missinVertex() { + for (int i = 0; i < g.n; i++) { + if (!appears(i)) { + return i; + } + } + return -1; + } + + /** + * Checks if the given vertex appears in some bag + * of this target tree-decomposition + * @param v vertex number + * @return {@code true} if vertex {@code v} appears in + * some bag + */ + private boolean appears(int v) { + for (int i = 1; i <= nb; i++) { + if (indexOf(v, bags[i]) >= 0) { + return true; + } + } + return false; + } + + /** + * Checks if there is some edge not appearing in any + * bag of this target 
tree-decomposition + * @return two-element int array representing the + * missing edge; null if there is no missing edge + */ + private int[] missingEdge() { + for (int i = 0; i < g.n; i++) { + for (int j = 0; j < g.degree[i]; j++) { + int h = g.neighbor[i][j]; + if (!appears(i, h)) { + return new int[]{i, h}; + } + } + } + return null; + } + + /** + * Checks if the edge between the two specified vertices + * appears in some bag of this target tree-decomposition + * @param u one endvertex of the edge + * @param v the other endvertex of the edge + * @return {@code true} if this edge appears in some bag; + * {@code false} otherwise + */ + private boolean appears(int u, int v) { + for (int i = 1; i <= nb; i++) { + if (indexOf(u, bags[i]) >= 0 && + indexOf(v, bags[i]) >= 0) { + return true; + } + } + return false; + } + + /** + * Checks if this target tree-decomposition violates + * the connectivity condition for some vertex of the graph + * @return {@code true} if the condition is violated + * for some vertex; {@code false} otherwise. 
+ */ + private boolean violatesConnectivity() { + for (int v = 0; v < g.n; v++) { + if (violatesConnectivity(v)) { + return true; + } + } + return false; + } + + /** + * Checks if this target tree-decomposition violates + * the connectivity condition for the given vertex {@code v} + * @param v vertex number + * @return {@code true} if the connectivity condition is violated + * for vertex {@code v} + */ + private boolean violatesConnectivity(int v) { + boolean mark[] = new boolean[nb + 1]; + + for (int i = 1; i <= nb; i++) { + if (indexOf(v, bags[i]) >= 0) { + markFrom(i, v, mark); + break; + } + } + + for (int i = 1; i <= nb; i++) { + if (!mark[i] && indexOf(v, bags[i]) >= 0) { + return true; + } + } + return false; + } + + /** + * Mark the tree nodes (bags) containing the given vertex + * that are reachable from the bag numbered {@code i}, + * without going through the nodes already marked + * @param i bag number + * @param v vertex number + * @param mark boolean array recording the marks: + * {@code mark[i]} represents whether bag {@code i} is marked + */ + private void markFrom(int i, int v, boolean mark[]) { + if (mark[i]) { + return; + } + mark[i] = true; + + for (int j = 0; j < degree[i]; j++) { + int h = neighbor[i][j]; + if (indexOf(v, bags[h]) >= 0) { + markFrom(h, v, mark); + } + } + } + + /** + * Simplify this target tree-decomposition by + * forcing the intersection between each pair of + * adjacent bags to be a minimal separator + */ + + public void minimalize() { + if (bagSets == null) { + bagSets = new XBitSet[nb + 1]; + for (int i = 1; i <= nb; i++) { + bagSets[i] = new XBitSet(bags[i]); + } + } + for (int i = 1; i <= nb; i++) { + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + XBitSet separator = bagSets[i].intersectWith(bagSets[j]); + XBitSet iSide = new XBitSet(g.n); + collectVertices(i, j, iSide); + iSide.andNot(separator); + XBitSet neighbors = g.neighborSet(iSide); + XBitSet delta = separator.subtract(neighbors); + 
bagSets[i].andNot(delta); + } + } + for (int i = 1; i <= nb; i++) { + bags[i] = bagSets[i].toArray(); + } + } + + /** + * Collect vertices in the bags in the specified + * subtree of this target tree-decomposition + * @param i the bag index of the root of the subtree + * @param exclude the index of the neighboring bag + * to be excluded from the subtree (typically the parent bag) + * @param set the {@code XBitSet} in which to collect the + * vertices + */ + private void collectVertices(int i, int exclude, XBitSet set) { + set.or(bagSets[i]); + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j != exclude) { + collectVertices(j, i, set); + } + } + } + + /** + * Canonicalize this target tree-decomposition by + * forcing every bag to be a potential maximal clique. + * A naive implementation with no efficiency considerations. + */ + + public void canonicalize() { + if (bagSets == null) { + bagSets = new XBitSet[nb + 1]; + for (int i = 1; i <= nb; i++) { + bagSets[i] = new XBitSet(bags[i]); + } + } + boolean moving = true; + while (moving) { + moving = false; + int i = 1; + while (i <= nb) { + if (trySplit(i)) { + moving = true; + } + i++; + } + } + } + + private boolean trySplit(int i) { + XBitSet neighborSets[] = new XBitSet[g.n]; + XBitSet b = bagSets[i]; + ArrayList components = g.getComponents(b); + XBitSet seps[] = new XBitSet[components.size()]; + for (int j = 0; j < seps.length; j++) { + seps[j] = g.neighborSet(components.get(j)).intersectWith(b); + } + + for (int v = b.nextSetBit(0); v >= 0; + v = b.nextSetBit(v + 1)) { + XBitSet ns = g.neighborSet[v].intersectWith(b); + for (XBitSet sep: seps) { + if (sep.get(v)) { + ns.or(sep); + } + } + ns.clear(v); + neighborSets[v] = ns.intersectWith(b); + } + + for (int v = b.nextSetBit(0); v >= 0; + v = b.nextSetBit(v + 1)) { + XBitSet left = neighborSets[v]; + left.set(v); + XBitSet right = b.subtract(left); + if (right.isEmpty()) { + continue; + } + XBitSet separator = new XBitSet(g.n); + for (int w = 
right.nextSetBit(0); w >= 0; + w = right.nextSetBit(w + 1)) { + separator.or(neighborSets[w]); + } + right.or(separator); + + int j = addBag(right.toArray()); + + bags[i] = left.toArray(); + bagSets[i] = left; + + int ni = 0; + int nj = 0; + neighbor[j] = new int[degree[i]]; + for (int k = 0; k < degree[i]; k++) { + int h = neighbor[i][k]; + if (bagSets[h].intersects(left)) { + neighbor[i][ni++] = h; + } + else { + neighbor[j][nj++] = h; + } + } + degree[i] = ni; + degree[j] = nj; + neighbor[i] = Arrays.copyOf(neighbor[i], ni); + neighbor[j] = Arrays.copyOf(neighbor[j], nj); + + addEdge(i, j); + + for (int k = 0; k < nj; k++) { + int h = neighbor[j][k]; + for (int l = 0; l < degree[h]; l++) { + if (neighbor[h][l] == i) { + neighbor[h][l] = j; + } + } + } + return true; + } + return false; + } + + /** + * Tests if the target tree-decomposition is canonical, + * i.e., consists of potential maximal cliques. + */ + + public boolean isCanonical() { + for (int i = 1; i <= nb; i++) { + if (!isCanonicalBag(new XBitSet(bags[i]))) { + return false; + } + } + return true; + } + + private boolean isCanonicalBag(XBitSet b) { + ArrayList components = g.getComponents(b); + + for (int v = b.nextSetBit(0); v >= 0; + v = b.nextSetBit(v + 1)) { + for (int w = b.nextSetBit(v + 1); w >= 0; + w = b.nextSetBit(w + 1)) { + if (g.neighborSet[v].get(w)) { + continue; + } + boolean covered = false; + for (XBitSet compo: components) { + XBitSet ns = g.neighborSet(compo); + if (ns.get(v) && ns.get(w)) { + covered = true; + break; + } + } + if (!covered) { + return false; + } + } + } + return true; + } + + public void analyze(int rootIndex) { + if (bagSets == null) { + bagSets = new XBitSet[nb + 1]; + for (int i = 1; i <= nb; i++) { + bagSets[i] = new XBitSet(bags[i]); + } + } + + analyze(rootIndex, -1); + } + + private void analyze(int i, int exclude) { + System.out.println(i + ": " + bagSets[i]); + XBitSet separator = bagSets[i]; + XBitSet set[] = new XBitSet[degree[i]]; + + ArrayList 
components = g.getComponents(separator); + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + set[a] = new XBitSet(g.n); + collectVertices(j, i, set[a]); + } + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j != exclude) { + System.out.println(" subtree at " + j); + for (XBitSet compo: components) { + if (compo.isSubset(set[a])) { + System.out.println(" contains " + compo); + } + else if (compo.intersects(set[a])) { + System.out.println(" intersects " + compo); + System.out.println(" but missing " + + compo.subtract(set[a])); + } + } + } + } + for (XBitSet compo: components) { + boolean intersecting = false; + for (int a = 0; a < degree[i]; a++) { + if (compo.intersects(set[a])) { + intersecting = true; + } + } + if (!intersecting) { + System.out.println(" component totally missing: " + + compo); + } + } + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j != exclude) { + analyze(j, i); + } + } + } + + /** + * Computes the number of tree edges of this tree-decomposition, + * which is the sum of the node degrees divided by 2 + * @return the number of edges + */ + private int numberOfEdges() { + int count = 0; + for (int i = 1; i <= nb; i++) { + count += degree[i]; + } + return count / 2; + } + /** + * Finds the index at which the given element + * is found in the given array. + * @param x int value to be searched + * @param a int array in which to find {@code x} + * @return {@code i} such that {@code a[i] == x}; + * -1 if no such index exists + */ + + private int indexOf(int x, int a[]) { + return indexOf(x, a, a.length); + } + + /** + * Finds the index at which the given element + * is found in the given array. 
+ * @param x int value to be searched + * @param a int array in which to find {@code x} + * @param n the number of elements to be searched + * in the array + * @return {@code i} such that {@code a[i] == x} and + * 0 <= i < n; -1 if no such index exists + */ + private int indexOf(int x, int a[], int n) { + for (int i = 0; i < n; i++) { + if (x == a[i]) { + return i; + } + } + return -1; + } + + /** + * Reads the tree-decomposition for a given graph from + * a file at a given path and with a given name, in the + * PACE .td format; the extension .td is added to the name. + * @param path path at which the file is found + * @param name file name, without the extension + * @param g graph + * @return the tree-decomposition read + */ + public static TreeDecomposition readDecomposition(String path, String name, Graph g) { + File file = new File(path + "/" + name + ".td"); + try { + BufferedReader br = new BufferedReader(new FileReader(file)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("s")) { + String s[] = line.split(" "); + if (!s[1].equals("td")) { + throw new RuntimeException("!!Not treewidth solution " + line); + } + int nb = Integer.parseInt(s[2]); + int width = Integer.parseInt(s[3]) - 1; + int n = Integer.parseInt(s[4]); + + System.out.println("nb = " + nb + ", width = " + width + ", n = " + n); + TreeDecomposition td = new TreeDecomposition(0, width, g); + + for (int i = 0; i < nb; i++) { + line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + s = line.split(" "); + + if (!s[0].equals("b")) { + throw new RuntimeException("!!line starting with 'b' expected"); + } + + if (!s[1].equals(Integer.toString(i + 1))) { + throw new RuntimeException("!!Bag number " + (i + 1) + " expected"); + } + + int bag[] = new int[s.length - 2]; + for (int j = 0; j < bag.length; j++) { + bag[j] = Integer.parseInt(s[j + 2]) - 1; + } + td.addBag(bag); + } + + while (true) { + line = 
br.readLine(); + while (line != null && line.startsWith("c")) { + line = br.readLine(); + } + if (line == null) { + break; + } + + s = line.split(" "); + + int j = Integer.parseInt(s[0]); + int k = Integer.parseInt(s[1]); + + td.addEdge(j, k); + td.addEdge(k, j); + } + + return td; + } + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return null; + } +} diff --git a/solvers/TCS-Meiji/tw/exact/Unsigned.java b/solvers/TCS-Meiji/tw/exact/Unsigned.java new file mode 100644 index 0000000..c84f8d3 --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/Unsigned.java @@ -0,0 +1,177 @@ +/* + * Copyright (c) 2017, Hiromu Otsuka + */ + +package tw.exact; + +public class Unsigned{ + private Unsigned(){} + + public static final long ALL_ONE_BIT = 0xFFFFFFFFFFFFFFFFL; + + public static long consecutiveOneBit(int i, int j){ + return (ALL_ONE_BIT >>> (64 - j)) & (ALL_ONE_BIT << i); + } + + public static byte byteValue(long value){ + return (byte)value; + } + + public static short shortValue(long value){ + return (short)value; + } + + public static int intValue(long value){ + return (int)value; + } + + public static int toUnsignedInt(byte b){ + return Byte.toUnsignedInt(b); + } + + public static int toUnsignedInt(short s){ + return Short.toUnsignedInt(s); + } + + public static long toUnsignedLong(byte b){ + return Byte.toUnsignedLong(b); + } + + public static long toUnsignedLong(short s){ + return Short.toUnsignedLong(s); + } + + public static long toUnsignedLong(int i){ + return Integer.toUnsignedLong(i); + } + + public static int binarySearch(byte[] a, byte key){ + return binarySearch(a, 0, a.length, key); + } + + public static int binarySearch(short[] a, short key){ + return binarySearch(a, 0, a.length, key); + } + + public static int binarySearch(int[] a, int key){ + return binarySearch(a, 0, a.length, key); + } + + public static int binarySearch(long[] a, long key){ + return binarySearch(a, 0, a.length, key); + } + + 
public static int compare(byte a, byte b){ + return Integer.compareUnsigned( + toUnsignedInt(a), toUnsignedInt(b)); + } + + public static int compare(short a, short b){ + return Integer.compareUnsigned( + toUnsignedInt(a), toUnsignedInt(b)); + } + + public static int compare(int a, int b){ + return Integer.compareUnsigned(a, b); + } + + public static int compare(long a, long b){ + return Long.compareUnsigned(a, b); + } + + public static int binarySearch(byte[] a, + int fromIndex, int toIndex, byte key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + byte midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } + + public static int binarySearch(short[] a, + int fromIndex, int toIndex, short key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + short midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } + + public static int binarySearch(int[] a, + int fromIndex, int toIndex, int key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + int midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } + + public static int binarySearch(long[] a, + int fromIndex, int toIndex, long key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + long midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } +} diff --git 
a/solvers/TCS-Meiji/tw/exact/XBitSet.java b/solvers/TCS-Meiji/tw/exact/XBitSet.java new file mode 100644 index 0000000..e5622a0 --- /dev/null +++ b/solvers/TCS-Meiji/tw/exact/XBitSet.java @@ -0,0 +1,308 @@ +/* + * Copyright (c) 2016, Hisao Tamaki + */ +package tw.exact; + +import java.util.BitSet; +import java.util.Comparator; + +/** + * This class extends {@code java.util.BitSet} which implements + * a variable length bit vector. + * The main purpose is to provide methods that create + * a new vector as a result of a set operation such as + * union and intersection, rather than modifying the + * existing one. See API documentation for {@code java.util.BitSet}. + * + * @author Hisao Tamaki + */ + +public class XBitSet extends BitSet + implements Comparable{ + + /** + * Creates an empty {@code XBitSet}. + */ + public XBitSet() { + super(); + } + + /** + * Creates an empty {@code XBitSet} whose initial size is large enough to explicitly + * contain members smaller than {@code n}. + * + * @param n the initial size of the {@code XBitSet} + * @throws NegativeArraySizeException if the specified initial size + * is negative + */ + public XBitSet(int n) { + super(n); + } + + /** + * Creates an {@code XBitSet} with members provided by an array + * + * @param a an array of members to be in the {@code XBitSet} + */ + public XBitSet(int a[]) { + super(); + for (int i = 0; i < a.length; i++) { + set(a[i]); + } + } + + /** + * Creates an {@code XBitSet} with members provided by an array. + * The initial size is large enough to explicitly + * contain members smaller than {@code n}. 
+ * + * @param n the initial size of the {@code XBitSet} + * @param a an array of indices where the bits should be set + * @throws NegativeArraySizeException if the specified initial size + * is negative + */ + public XBitSet(int n, int a[]) { + super(n); + for (int i = 0; i < a.length; i++) { + set(a[i]); + } + } + + /** + * Returns {@code true} if this target {@code XBitSet} is a subset + * of the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} is a subset + * of the argument {@code XBitSet} + */ + public boolean isSubset(XBitSet set) { + BitSet tmp = (BitSet) this.clone(); + tmp.andNot(set); + return tmp.isEmpty(); + } + + /** + * Returns {@code true} if this target {@code XBitSet} is disjoint + * from the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} is + * disjoint from the argument {@code XBitSet} + */ + public boolean isDisjoint(XBitSet set) { + BitSet tmp = (BitSet) this.clone(); + tmp.and(set); + return tmp.isEmpty(); + } + + /** + * Returns {@code true} if this target {@code XBitSet} has a + * non-empty intersection with the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} + * intersects with the argument {@code XBitSet} + */ + + public boolean intersects(XBitSet set) { + return super.intersects(set); + } + + /** + * Returns {@code true} if this target {@code XBitSet} is a superset + * of the argument bit set + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} is a superset + * of the argument {@code XBitSet} + */ + public boolean isSuperset(XBitSet set) { + BitSet tmp = (BitSet) set.clone(); + tmp.andNot(this); + return tmp.isEmpty(); + } + + /** + * Returns a {@code XBitSet} that is the union of this + * target {@code XBitSet} and the argument {@code XBitSet} + * + * @param set an 
{@code XBitSet} + * @return the union {@code XBitSet} + */ + public XBitSet unionWith(XBitSet set) { + XBitSet result = (XBitSet) this.clone(); + result.or(set); + return result; + } + + /** + * Returns an {@code XBitSet} that is the intersection of this + * target {@code XBitSet} and the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return the intersection {@code XBitSet} + */ + public XBitSet intersectWith(XBitSet set) { + XBitSet result = (XBitSet) this.clone(); + result.and(set); + return result; + } + + /** + * Returns an {@code XBitSet} that is the result of + * removing the members of the argument {@code XBitSet} + * from the target {@code XBitSet}. + * @param set an {@code XBitSet} + * @return the difference {@code XBitSet} + */ + public XBitSet subtract(XBitSet set) { + XBitSet result = (XBitSet) this.clone(); + result.andNot(set); + return result; + } + + /** + * Returns {@code true} if the target {@code XBitSet} has a member + * that is smaller than the smallest member of the argument {@code XBitSet}. + * Both the target and the argument {@code XBitSet} must be non-empty + * to ensure a meaningful result. + * @param set an {@code XBitSet} + * @return {@code true} if the target {@code XBitSet} has a member + * smaller than the smallest member of the argument {@code XBitSet}; + * {@code false} otherwise + */ + public boolean hasSmaller(XBitSet set) { + assert !isEmpty() && !set.isEmpty(); + return this.nextSetBit(0) < set.nextSetBit(0); + } + + @Override + /** + * Compare the target {@code XBitSet} with the argument + * {@code XBitSet}, where the bit vectors are viewed as + * binary representation of an integer, the bit {@code i} + * set meaning that the number contains {@code 2^i}. 
+ * @return negative value if the target is smaller, positive if it is + * larger, and zero if it equals the argument + */ + public int compareTo(XBitSet set) { + int l1 = this.length(); + int l2 = set.length(); + if (l1 != l2) { + return l1 - l2; + } + for (int i = l1 - 1; i >= 0; i--) { + if (this.get(i) && !set.get(i)) return 1; + else if (!this.get(i) && set.get(i)) return -1; + } + return 0; + } + + /** + * Converts the target {@code XBitSet} into an array + * that contains all the members in the set + * @return the array representation of the set + */ + public int[] toArray() { + int[] result = new int[cardinality()]; + int k = 0; + for (int i = nextSetBit(0); i >=0; i= nextSetBit(i + 1)) { + result[k++] = i; + } + return result; + } + + /** + * Checks if this target bit set has an element + * that is smaller than every element in + * the argument bit set + * @param vs bit set + * @return {@code true} if this bit set has an element + * smaller than every element in {@code vs} + */ + public boolean hasSmallerVertexThan(XBitSet vs) { + if (this.isEmpty()) return false; + else if (vs.isEmpty()) return true; + else return nextSetBit(0) < vs.nextSetBit(0); + } + + /** + * holds the reference to an instance of the {@code DescendingComparator} + * for {@code BitSet} + */ + public static final Comparator descendingComparator = + new DescendingComparator(); + + /** + * holds the reference to an instance of the {@code AscendingComparator} + * for {@code BitSet} + */ + public static final Comparator ascendingComparator = + new AscendingComparator(); + + /** + * holds the reference to an instance of the {@code CardinalityComparator} + * for {@code BitSet} + */ + public static final Comparator cardinalityComparator = + new CardinalityComparator(); + + /** + * A comparator for {@code BitSet}. The {@code compare} + * method compares the two vectors in the lexicographic order + * where the highest bit is the most significant. 
+ */ + public static class DescendingComparator implements Comparator { + @Override + public int compare(BitSet s1, BitSet s2) { + int l1 = s1.length(); + int l2 = s2.length(); + if (l1 != l2) { + return l1 - l2; + } + for (int i = l1 - 1; i >= 0; i--) { + if (s1.get(i) && !s2.get(i)) return 1; + else if (!s1.get(i) && s2.get(i)) return -1; + } + return 0; + } + } + + /** + * A comparator for {@code BitSet}. The {@code compare} method compares + * the two vectors in the lexicographic order where the + * lowest bit is the most significant. + */ + public static class AscendingComparator implements Comparator { + @Override + public int compare(BitSet s1, BitSet s2) { + int l1 = s1.length(); + int l2 = s2.length(); + + for (int i = 0; i < Math.min(l1, l2); i++) { + if (s1.get(i) && !s2.get(i)) return 1; + else if (!s1.get(i) && s2.get(i)) return -1; + } + return l1 - l2; + } + } + + /** + * A comparator for {@code BitSet}. The {@code compare} method compares + * the two sets in terms of the cardinality. 
In case of + * a tie, the two sets are compared by the {@code AscendingComparator} + */ + public static class CardinalityComparator implements Comparator { + @Override + public int compare(BitSet s1, BitSet s2) { + int c1 = s1.cardinality(); + int c2 = s2.cardinality(); + if (c1 != c2) { + return c1 - c2; + } + else + return ascendingComparator.compare(s1, s2); + } + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/ArraySet.java b/solvers/TCS-Meiji/tw/heuristic/ArraySet.java new file mode 100644 index 0000000..97aeff6 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/ArraySet.java @@ -0,0 +1,635 @@ +/* + * Copyright (c) 2017, Hiromu Ohtsuka +*/ + +package tw.heuristic; + +import java.util.Arrays; +import java.util.Comparator; + +public class ArraySet +implements Comparable< ArraySet >, Cloneable{ + public static final int DEFAULT_INITIAL_CAPACITY = 64; + int size; + int[] a; + int hash = 1; + + int index0; + + public ArraySet(){ + this(DEFAULT_INITIAL_CAPACITY); + } + + public ArraySet(int[] a){ + this.size = a.length; + this.a = Arrays.copyOf(a, a.length); + Arrays.sort(this.a); + rehash(); + } + + public ArraySet(int initialCapacity){ + this.a = new int[initialCapacity]; + rehash(); + } + + public ArraySet(int initialCapacity, int[] a){ + this(initialCapacity); + for(int i = 0; i < a.length; i++){ + this.a[i] = a[i]; + } + Arrays.sort(this.a, 0, a.length); + rehash(); + } + + public boolean isSubset(ArraySet set){ + int i = 0, j = 0; + while(i < size && j < set.size){ + if(a[i] < set.a[j]){ + return false; + } + else if(a[i] > set.a[j]){ + ++j; + } + else{ + ++i; ++j; + } + } + return i == size; + } + + public boolean isDisjoint(ArraySet set){ + return !intersects(set); + } + + public boolean intersects(ArraySet set){ + int i = 0, j = 0; + while(i < size && j < set.size){ + if(a[i] < set.a[j]){ + ++i; + } + else if(a[i] > set.a[j]){ + ++j; + } + else{ + return true; + } + } + return false; + } + + public boolean isSuperset(ArraySet set){ + return 
set.isSubset(this); + } + + public ArraySet unionWith(ArraySet set){ + int i = 0, j = 0, k = 0; + while(i < size && j < set.size){ + if(a[i] < set.a[j]){ + ++i; + } + else if(a[i] > set.a[j]){ + ++j; + } + else{ + ++k; ++i; ++j; + } + } + + int[] result = new int[size + set.size - k]; + i = j = k = 0; + while(i < size && j < set.size){ + if(a[i] < set.a[j]){ + result[k++] = a[i++]; + } + else if(a[i] > set.a[j]){ + result[k++] = set.a[j++]; + } + else{ + result[k++] = a[i]; + ++i; ++j; + } + } + while(i < size){ + result[k++] = a[i++]; + } + while(j < set.size){ + result[k++] = set.a[j++]; + } + + return new ArraySet(result); + } + + public ArraySet intersectWith(ArraySet set){ + int i = 0, j = 0, k = 0; + while(i < size && j < set.size){ + if(a[i] < set.a[j]){ + ++i; + } + else if(a[i] > set.a[j]){ + ++j; + } + else{ + ++i; ++j; + ++k; + } + } + + int[] result = new int[k]; + i = j = k = 0; + while(i < size && j < set.size){ + if(a[i] < set.a[j]){ + ++i; + } + else if(a[i] > set.a[j]){ + ++j; + } + else{ + result[k++] = a[i]; + ++i; ++j; + } + } + + return new ArraySet(result); + } + + public ArraySet subtract(ArraySet set){ + ArraySet result = (ArraySet)this.clone(); + result.andNot(set); + return result; + } + + public boolean hasSmaller(ArraySet set){ + return a[0] < set.a[0]; + } + + public int[] toArray(){ + return Arrays.copyOf(a, size); + } + + public boolean hasSmallerVertexThan(ArraySet set){ + if(isEmpty()){ + return false; + } + if(set.isEmpty()){ + return true; + } + return a[0] < set.a[0]; + } + + public int cardinality(){ + return size; + } + + public void and(ArraySet set){ + int i = 0, j = 0, k = 0; + int[] cpa = Arrays.copyOf(a, size); + int cpsize = size; + + clear(); + + while(i < cpsize && j < set.size){ + if(cpa[i] < set.a[j]){ + ++i; + } + else if(cpa[i] > set.a[j]){ + ++j; + } + else{ + a[size++] = cpa[i]; + ++i; ++j; + } + } + + rehash(); + } + + public void andNot(ArraySet set){ + int i = 0, j = 0, k = 0; + int[] cpa = Arrays.copyOf(a, 
size); + int cpsize = size; + + clear(); + + while(i < cpsize && j < set.size){ + if(cpa[i] < set.a[j]){ + a[size++] = cpa[i]; + ++i; + } + else if(cpa[i] > set.a[j]){ + ++j; + } + else{ + ++i; ++j; ++k; + } + } + while(i < cpsize){ + a[size++] = cpa[i++]; + } + + rehash(); + } + + public void clear(){ + size = 0; + rehash(); + } + + public void clear(int i){ + int j = Arrays.binarySearch(a, 0, size, i); + if(j >= 0){ + for(int k = j; k + 1 < size; k++){ + a[k] = a[k + 1]; + } + --size; + rehash(); + } + } + + public void clear(int fromIndex, int toIndex){ + for(int i = fromIndex; i < toIndex; i++){ + clear(i); + } + } + + public void flip(int i){ + set(i, !get(i)); + } + + public void flip(int fromIndex, int toIndex){ + for(int i = fromIndex; i < toIndex; i++){ + flip(i); + } + } + + public boolean get(int i){ + return Arrays.binarySearch(a, 0, size, i) >= 0; + } + + private int insertionPointOf(int i){ + int j = Arrays.binarySearch(a, 0, size, i); + if(j >= 0){ + return -1; + } + return -j - 1; + } + + public ArraySet get(int fromIndex, int toIndex){ + throw new UnsupportedOperationException(); + } + + @Override + public int hashCode(){ + int seed = 1234; + return seed ^ hash; + } + + private void rehash(){ + hash = 1; + for(int i = 0; i < size; i++){ + hash = 31 * hash + a[i]; + } + } + + public int length(){ + if(isEmpty()){ + return 0; + } + return a[size - 1] + 1; + } + + public int nextClearBit(int fromIndex){ + int lb = lowerBound(fromIndex); + if(lb == size || a[lb] > fromIndex){ + return fromIndex; + } + return nextClearBit(fromIndex + 1); + } + + public int nextSetBit(int fromIndex){ + /* + if(isEmpty()){ + return -1; + } + int lb = lowerBound(fromIndex); + if(lb == size){ + return -1; + } + return a[lb]; + */ + if(isEmpty() || (fromIndex > a[size - 1])){ + index0 = 0; + return -1; + } + if(index0 + 1 < size && + (fromIndex > a[index0] && fromIndex <= a[index0 + 1])){ + return a[++index0]; + } + else{ + index0 = lowerBound(fromIndex); + return a[index0]; 
+ } + } + + private int lowerBound(int i){ + int j = Arrays.binarySearch(a, 0, size, i); + if(j >= 0){ + return j; + } + return -j - 1; + } + + public void or(ArraySet set){ + int[] cpa = Arrays.copyOf(a, size); + int cpsize = size; + + clear(); + + int i = 0, j = 0; + while(i < cpsize && j < set.size){ + ensureCapasity(); + if(cpa[i] < set.a[j]){ + a[size++] = cpa[i++]; + } + else if(cpa[i] > set.a[j]){ + a[size++] = set.a[j++]; + } + else{ + a[size++] = cpa[i]; + ++i; ++j; + } + } + while(i < cpsize){ + ensureCapasity(); + a[size++] = cpa[i++]; + } + while(j < set.size){ + ensureCapasity(); + a[size++] = set.a[j++]; + } + + rehash(); + } + + public int previousClearBit(int fromIndex){ + throw new UnsupportedOperationException(); + } + + public int previousSetBit(int fromIndex){ + throw new UnsupportedOperationException(); + } + + public void set(int i){ + int j = insertionPointOf(i); + if(j >= 0){ + ensureCapasity(); + for(int k = size; k - 1 >= j; k--){ + a[k] = a[k - 1]; + } + a[j] = i; + ++size; + rehash(); + } + } + + public void set(int i, boolean value){ + if(value){ + set(i); + } + else{ + clear(i); + } + } + + public void set(int fromIndex, int toIndex){ + for(int i = fromIndex; i < toIndex; i++){ + set(i); + } + } + + public void set(int fromIndex, int toIndex, boolean value){ + for(int i = fromIndex; i < toIndex; i++){ + set(i, value); + } + } + + public int size(){ + throw new UnsupportedOperationException(); + } + + @Override + public String toString(){ + StringBuilder sb = new StringBuilder(); + sb.append("{"); + for(int i = 0; i < size; i++){ + sb.append(a[i]); + if(i != size - 1){ + sb.append(", "); + } + } + sb.append("}"); + return sb.toString(); + } + + public void xor(ArraySet set){ + int i = 0, j = 0; + int[] cpa = Arrays.copyOf(a, size); + int cpsize = size; + + clear(); + + while(i < cpsize && j < set.size){ + ensureCapasity(); + if(cpa[i] < set.a[j]){ + a[size++] = cpa[i++]; + } + else if(cpa[i] > set.a[j]){ + a[size++] = set.a[j++]; + } + 
else{
+        ++i; ++j;
+      }
+    }
+    while(i < cpsize){
+      ensureCapasity();
+      a[size++] = cpa[i++];
+    }
+    while(j < set.size){
+      ensureCapasity();
+      a[size++] = set.a[j++];
+    }
+
+    rehash();
+  }
+
+  public boolean isEmpty(){
+    return size == 0;
+  }
+
+  @Override
+  public int compareTo(ArraySet set){
+    if(isEmpty() || set.isEmpty()){
+      if(isEmpty() && !set.isEmpty()){
+        return -1;
+      }
+      else if(!isEmpty() && set.isEmpty()){
+        return 1;
+      }
+      else{
+        return 0;
+      }
+    }
+
+    int i = size - 1, j = set.size - 1;
+    while(i >= 0 && j >= 0){
+      if(a[i] < set.a[j]){
+        return -1;
+      }
+      else if(a[i] > set.a[j]){
+        return 1;
+      }
+      // equal elements: advance both indices
+      --i; --j;
+    }
+
+    return 0;
+  }
+
+  @Override
+  public boolean equals(Object obj){
+    if(!(obj instanceof ArraySet)){
+      return false;
+    }
+    ArraySet set = (ArraySet)obj;
+    if(size != set.size){
+      return false;
+    }
+    return equals(a, set.a, 0, size);
+  }
+
+  private void ensureCapasity(){
+    if(a.length == size){
+      a = Arrays.copyOf(a, 2 * size + 1);
+    }
+  }
+
+  private static boolean equals(
+      int[] a1, int[] a2, int fromIndex, int toIndex){
+    for(int i = fromIndex; i < toIndex; i++){
+      if(a1[i] != a2[i]){
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public ArraySet clone(){
+    try{
+      ArraySet result = (ArraySet)super.clone();
+      result.a = Arrays.copyOf(a, a.length);
+      return result;
+    }
+    catch(CloneNotSupportedException e){
+      throw new AssertionError();
+    }
+  }
+
+  public byte[] toByteArray(){
+    if(isEmpty()){
+      return new byte[0];
+    }
+
+    byte[] result = new byte[a[size - 1] / 8 + 1];
+    for(int i = 0; i < size; i++){
+      result[a[i] / 8] |= 1 << (a[i] % 8);
+    }
+
+    return result;
+  }
+
+  public long[] toLongArray(){
+    if(isEmpty()){
+      return new long[0];
+    }
+
+    long[] result = new long[a[size - 1] / 64 + 1];
+    for(int i = 0; i < size; i++){
+      result[a[i] / 64] |= 1L << (a[i] % 64);
+    }
+
+    return result;
+  }
+
+  public static final Comparator< ArraySet >
+    descendingComparator = new Comparator< ArraySet >(){
+    @Override
+    public int compare(ArraySet set1, ArraySet set2){
+      if(set1.isEmpty() || set2.isEmpty()){
+        if(set1.isEmpty() && !set2.isEmpty()){
+          return -1;
+        }
+        else if(!set1.isEmpty() && set2.isEmpty()){
+          return 1;
+        }
+        else{
+          return 0;
+        }
+      }
+
+      int i = set1.size - 1, j = set2.size - 1;
+      while(i >= 0 && j >= 0){
+        if(set1.a[i] < set2.a[j]){
+          return -1;
+        }
+        else if(set1.a[i] > set2.a[j]){
+          return 1;
+        }
+        // equal elements: advance both indices
+        --i; --j;
+      }
+
+      return 0;
+    }
+  };
+
+  public static final Comparator< ArraySet >
+    ascendingComparator = new Comparator< ArraySet >(){
+    @Override
+    public int compare(ArraySet set1, ArraySet set2){
+      if(set1.isEmpty() || set2.isEmpty()){
+        if(set1.isEmpty() && !set2.isEmpty()){
+          return -1;
+        }
+        else if(!set1.isEmpty() && set2.isEmpty()){
+          return 1;
+        }
+        else{
+          return 0;
+        }
+      }
+
+      int i = 0, j = 0;
+      while(i < set1.size && j < set2.size){
+        if(set1.a[i] < set2.a[j]){
+          return 1;
+        }
+        else if(set1.a[i] > set2.a[j]){
+          return -1;
+        }
+        // equal elements: advance both indices
+        ++i; ++j;
+      }
+
+      return Integer.compare(
+          set1.a[set1.size - 1], set2.a[set2.size - 1]);
+    }
+  };
+
+  public static final Comparator< ArraySet >
+    cardinalityComparator = new Comparator< ArraySet >(){
+    @Override
+    public int compare(ArraySet set1, ArraySet set2){
+      int c1 = set1.cardinality();
+      int c2 = set2.cardinality();
+      if(c1 != c2){
+        return Integer.compare(c1, c2);
+      }
+      return ascendingComparator.compare(set1, set2);
+    }
+  };
+}
diff --git a/solvers/TCS-Meiji/tw/heuristic/Bag.java b/solvers/TCS-Meiji/tw/heuristic/Bag.java
new file mode 100644
index 0000000..afcf287
--- /dev/null
+++ b/solvers/TCS-Meiji/tw/heuristic/Bag.java
@@ -0,0 +1,610 @@
+/*
+ * Copyright (c) 2017, Hisao Tamaki and Hiromu Ohtsuka, Keitaro Makii
+*/
+
+package tw.heuristic;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Map;
+import java.util.HashMap;
+
+public class Bag implements Cloneable{
+  Bag parent;
+  VertexSet vertexSet;
+  int size;
+  Graph graph;
+  int conv[];
+  int inv[];
+  ArrayList<Bag> nestedBags;
+  ArrayList<Separator> separators;
+  ArrayList<Separator>
incidentSeparators; + int width; + int separatorWidth; + int lowerBound; + int inheritedLowerBound; + boolean optimal; + + static final boolean DEBUG = false; + + public Bag(Graph graph) { + this(null, graph.all); + this.graph = graph; + } + + public Bag(Bag parent, VertexSet vertexSet) { + this.parent = parent; + this.vertexSet = vertexSet; + size = vertexSet.cardinality(); + incidentSeparators = new ArrayList<>(); + } + + public void initializeForDecomposition() { + if (graph == null) { + if (parent == null) { + throw new RuntimeException("graph not available for decomposition"); + } + else { + makeLocalGraph(); + } + } + nestedBags = new ArrayList<>(); + separators = new ArrayList<>(); + width = 0; + separatorWidth = 0; + } + + public void attachSeparator(Separator separator) { + incidentSeparators.add(separator); + } + + public void makeRefinable() { + makeLocalGraph(); + nestedBags = new ArrayList<>(); + separators = new ArrayList<>(); + } + + public int maxNestedBagSize() { + if (nestedBags != null) { + int max = 0; + for (Bag bag:nestedBags) { + if (bag.size > max) { + max = bag.size; + } + } + return max; + } + return -1; + } + + public Bag addNestedBag(VertexSet vertexSet) { + Bag bag = new Bag(this, vertexSet); + nestedBags.add(bag); + return bag; + } + + public Separator addSeparator(VertexSet vertexSet) { + Separator separator = new Separator(this, vertexSet); + separators.add(separator); + return separator; + } + + public void addIncidentSeparator(Separator separator) { + incidentSeparators.add(separator); + } + + public void makeLocalGraph() { + graph = new Graph(size); + conv = new int[parent.size]; + inv = new int[size]; + + VertexSet vertexSet = this.vertexSet; + + int k = 0; + for (int v = 0; v < parent.size; v++) { + if (vertexSet.get(v)) { + conv[v] = k; + inv[k++] = v; + } + else { + conv[v] = -1; + } + } + + graph.inheritEdges(parent.graph, conv, inv); + + // System.out.println("filling all, " + incidentSeparators.size() + " incident 
separators");
+    for (Separator separator: incidentSeparators) {
+      // System.out.println("filling " + separator);
+      graph.fill(convert(separator.vertexSet, conv));
+    }
+  }
+
+  public int getWidth() {
+    if (nestedBags == null) {
+      return size - 1;
+    }
+    int max = 0;
+    for (Bag bag: nestedBags) {
+      int w = bag.getWidth();
+      if (w > max) {
+        max = w;
+      }
+    }
+    /*
+    for (Separator separator: separators) {
+      int w = separator.vertexSet.cardinality();
+      if (w > max) {
+        max = w;
+      }
+    }
+    */
+    return max;
+
+  }
+
+  public void setWidth() {
+    // assumes that the bag is flat
+
+    // System.out.println("setWidth for " + this.vertexSet);
+    // System.out.println("nestedBags = " + nestedBags);
+
+    if (nestedBags == null) {
+      width = size - 1;
+      separatorWidth = 0;
+      return;
+    }
+
+    width = 0;
+    separatorWidth = 0;
+
+    for (Bag bag: nestedBags) {
+      if (bag.size - 1 > width) {
+        width = bag.size - 1;
+      }
+    }
+
+    for (Separator separator: separators) {
+      if (separator.size > separatorWidth) {
+        separatorWidth = separator.size;
+      }
+    }
+
+    if (separatorWidth > width) {
+      width = separatorWidth;
+    }
+  }
+
+  public void flatten() {
+    if (nestedBags == null) {
+      return;
+    }
+
+    validate();
+    for (Bag bag: nestedBags) {
+      if (bag.nestedBags != null) {
+        bag.flatten();
+      }
+    }
+    validate();
+    ArrayList<Separator> newSeparatorList = new ArrayList<>();
+    for (Separator separator: separators) {
+      // System.out.println(separator.incidentBags.size() + " incident bags of " +
+      // separator);
+      ArrayList<Bag> newIncidentBags = new ArrayList<>();
+      for (Bag bag: separator.incidentBags) {
+        if (bag.parent == this && bag.nestedBags != null &&
+            !bag.nestedBags.isEmpty()) {
+          Bag nested = bag.findNestedBagContaining(
+              convert(separator.vertexSet, bag.conv));
+          if (nested == null) {
+            bag.dump();
+            System.out.println(" does not have a bag containing " +
+                convert(separator.vertexSet, bag.conv) + " which is originally " +
+                separator.vertexSet);
+            this.dump();
+          }
+          nested.addIncidentSeparator(separator);
+
newIncidentBags.add(nested); + + } + else { + newIncidentBags.add(bag); + } + } + if (!newIncidentBags.isEmpty()) { + separator.incidentBags = newIncidentBags; + newSeparatorList.add(separator); + } + // System.out.println("processed separator :" + separator); + } + separators = newSeparatorList; + + ArrayList temp = nestedBags; + nestedBags = new ArrayList<>(); + for (Bag bag: temp) { + if (bag.nestedBags != null && !bag.nestedBags.isEmpty()) { + for (Bag nested: bag.nestedBags) { + // System.out.println("inverting " + nested); + nested.invert(); + nestedBags.add(nested); + // System.out.println("inverted " + nested); + } + for (Separator separator: bag.separators) { + // System.out.println("inverting sep " + separator); + separator.invert(); + this.separators.add(separator); + // System.out.println("inverted sep " + separator); + } + } + else { + // System.out.println("adding original bag " + bag.vertexSet); + nestedBags.add(bag); + } + } + setWidth(); + // System.out.println("bag of size " + size + " flattened into " + nestedBags.size() + " bags and width " + + // width); + // for (Bag bag: nestedBags) { + // System.out.println("incident separators of " + bag.vertexSet); + // for (Separator s: bag.incidentSeparators) { + // System.out.println(" " + s.vertexSet); + // for (Bag b: s.incidentBags) { + // System.out.println(" " + b.vertexSet); + // } + // } + // } + } + + public Bag findNestedBagContaining(VertexSet vertexSet) { + for (Bag bag: nestedBags) { + if (vertexSet.isSubset(bag.vertexSet)) { + return bag; + } + } + return null; + } + + public void invert() { + vertexSet = convert(vertexSet, parent.inv); + parent = parent.parent; + } + + public void convert() { + vertexSet = convert(vertexSet, parent.conv); + } + + public VertexSet convert(VertexSet s) { + return convert(s, conv); + } + + private VertexSet convert(VertexSet s, int[] conv) { + if (conv.length < s.length()) { + return null; + } + VertexSet result = new VertexSet(); + for (int v = 
s.nextSetBit(0); v >= 0; + v = s.nextSetBit(v + 1)) { + result.set(conv[v]); + } + return result; + } + + public TreeDecomposition toTreeDecomposition() { + setWidth(); + TreeDecomposition td = new TreeDecomposition(0, width, graph); + for (Bag bag: nestedBags) { + td.addBag(bag.vertexSet.toArray()); + } + + for (Separator separator: separators) { + VertexSet vs = separator.vertexSet; + Bag full = null; + for (Bag bag: separator.incidentBags) { + if (vs.isSubset(bag.vertexSet)) { + full = bag; + break; + } + } + + if (full != null) { + int j = nestedBags.indexOf(full) + 1; + for (Bag bag: separator.incidentBags) { + + if (bag != full) { + td.addEdge(j, nestedBags.indexOf(bag) + 1); + } + } + } + else { + int j = td.addBag(separator.vertexSet.toArray()); + for (Bag bag: separator.incidentBags) { + td.addEdge(j, nestedBags.indexOf(bag) + 1); + } + } + } + + return td; + } + + public void detectSafeSeparators() { + for (Separator separator: separators) { + separator.figureOutSafetyBySPT(); + } + } + + public long detectSafeSeparators(long timeMS) { + long sum = 0; + for (Separator separator: separators) { + if(sum > timeMS){ + return sum; + } + separator.figureOutSafetyBySPT(); + sum += separator.getSteps() * graph.n / 10000; + separator.safeSteps = 1; + } + return sum; + } + + public Separator choiceWall(int k){ + if(separators == null){ + return null; + } + Separator separator = null; + // greedy + for(Separator s : separators){ + if(separator == null || + s.size < separator.size){ + separator = s; + } + } + if(separator != null && separator.size < k){ + separator.wall = true; + return separator; + } + return null; + } + + public void pack() { + ArrayList newBagList = new ArrayList<>(); + for (Bag bag: nestedBags) { + if (bag.parent == this) { + ArrayList bagsToPack = new ArrayList<>(); + bag.collectBagsToPack(bagsToPack, null); + // System.out.println("bags to pack: " + bagsToPack); + if (bagsToPack.size() >= 2) { + VertexSet vertexSet = new VertexSet(graph.n); + 
for (Bag toPack: bagsToPack) { + vertexSet.or(toPack.vertexSet); + } + Bag packed = new Bag(this, vertexSet); + packed.initializeForDecomposition(); + packed.nestedBags = bagsToPack; + for (Bag toPack: bagsToPack) { + toPack.parent = packed; + toPack.convert(); + } + newBagList.add(packed); + } + else { + newBagList.add(bag); + } + } + } + nestedBags = newBagList; + + ArrayList newSeparatorList = new ArrayList<>(); + + for (Separator separator: separators) { + boolean internal = true; + Bag parent = null; + for (Bag b: separator.incidentBags) { + if (b.parent == this) { + internal = false; + break; + } + else if (parent == null) { + parent = b.parent; + } + else if (b.parent != parent) { + internal = false; + break; + } + } + if (internal) { + separator.parent = parent; + separator.convert(); + parent.separators.add(separator); + } + else { + ArrayList newIncidentBags = new ArrayList<>(); + for (Bag b: separator.incidentBags) { + if (b.parent == this) { + newIncidentBags.add(b); + } + else { + newIncidentBags.add(b.parent); + b.parent.incidentSeparators.add(separator); + b.incidentSeparators.remove(separator); + } + } + separator.incidentBags = newIncidentBags; + newSeparatorList.add(separator); + } + } + + separators = newSeparatorList; + + for (Bag bag: nestedBags) { + bag.setWidth(); + } + setWidth(); + } + + void collectBagsToPack(ArrayList list, Separator from) { + list.add(this); + for (Separator separator: incidentSeparators) { + // System.out.println(" safe = " + separator.safe); + if (separator == from || separator.safe || separator.wall) { + continue; + } + separator.collectBagsToPack(list, this); + } + } + + public int countSafeSeparators() { + int count = 0; + for (Separator separator: separators) { + if (separator.safe) { + count++; + } + } + return count; + } + + public void dump() { + dump(""); + } + + public void validate() { + if(!DEBUG){ + return; + } + + if (nestedBags != null) { + // assert !nestedBags.isEmpty() : "no nested bags " + this; + for 
(Bag b: nestedBags) {
+        b.validate();
+        assert !b.vertexSet.isEmpty(): "empty bag " + b;
+        assert b.parent == this: "parent of " + b +
+          "\n which is " + b.parent +
+          "\n is supposed to be " + this;
+      }
+      for (Separator s: separators) {
+        assert !s.vertexSet.isEmpty(): "empty separator " + s;
+        assert s.parent == this: "parent of " + s +
+          "\n which is " + s.parent +
+          "\n is supposed to be " + this;
+      }
+      for (Bag b: nestedBags) {
+        for (Separator s: b.incidentSeparators) {
+          assert !s.vertexSet.isEmpty(): "empty separator " + s +
+            "\n incident to " + b;
+          assert s.parent == this: "parent of " + s +
+            "\n which is " + s.parent +
+            "\n is supposed to be " + this +
+            "\n where the separator is incident to bag " + b;
+          assert s.vertexSet.isSubset(b.vertexSet): "separator vertex set " + s.vertexSet +
+            "\n is not a subset of the bag vertex set " + b.vertexSet;
+        }
+      }
+      for (Separator separator: separators) {
+        for (Bag b: separator.incidentBags) {
+          assert b != null;
+          assert b.parent == this: "parent of " + b +
+            "\n which is " + b.parent +
+            "\n is supposed to be " + this +
+            "\n where the bag is incident to separator " + separator;
+          assert separator.vertexSet.isSubset(b.vertexSet): "separator vertex set " +
+            separator.vertexSet +
+            "\n is not a subset of the bag vertex set " + b.vertexSet;
+        }
+      }
+    }
+  }
+
+  private void dump(String indent) {
+    System.out.println(indent + "bag:" + vertexSet);
+    System.out.print(indent + "width = " + width + ", conv = ");
+    System.out.println(Arrays.toString(conv));
+    if (nestedBags != null) {
+      System.out.println(indent + nestedBags.size() + " subbags:");
+      for (Bag bag: nestedBags) {
+        bag.dump(indent + " ");
+      }
+      for (Separator separator: separators) {
+        separator.dump(indent + " ");
+      }
+    }
+  }
+
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    if (parent != null) {
+      sb.append("bag" + parent.nestedBags.indexOf(this) + ":");
+    }
+    else {
+      sb.append("root bag :");
+    }
+    sb.append(vertexSet);
+    return
sb.toString(); + } + + @Override + public Bag clone(){ + try{ + Bag result = (Bag)super.clone(); + + if(conv != null){ + result.conv = Arrays.copyOf(conv, conv.length); + } + if(inv != null){ + result.inv = Arrays.copyOf(inv, inv.length); + } + + Map< Bag, Bag > newBagOf = new HashMap< >(); + if(nestedBags != null){ + result.nestedBags = new ArrayList< >(nestedBags.size()); + for(Bag b : nestedBags){ + Bag cb = (Bag)b.clone(); + cb.parent = result; + result.nestedBags.add(cb); + newBagOf.put(b, cb); + } + } + + Map< Separator, Separator > newSeparatorOf = new HashMap< >(); + if(separators != null){ + result.separators = new ArrayList< >(separators.size()); + for(Separator s : separators){ + Separator cs = (Separator)s.clone(); + cs.parent = result; + result.separators.add(cs); + newSeparatorOf.put(s, cs); + } + } + + if(incidentSeparators != null){ + result.incidentSeparators = new ArrayList< >(incidentSeparators); + } + + if(nestedBags != null){ + for(Bag b : result.nestedBags){ + ArrayList< Separator > newIncidentSeparatorList = + new ArrayList< >(b.incidentSeparators.size()); + for(Separator s : b.incidentSeparators){ + newIncidentSeparatorList.add(newSeparatorOf.get(s)); + } + b.incidentSeparators = newIncidentSeparatorList; + } + } + + if(separators != null){ + for(Separator s : result.separators){ + ArrayList< Bag > newIncidentBagList = + new ArrayList< >(s.incidentBags.size()); + for(Bag b : s.incidentBags){ + newIncidentBagList.add(newBagOf.get(b)); + } + s.incidentBags = newIncidentBagList; + } + } + + return result; + } + catch(CloneNotSupportedException cnse){ + throw new AssertionError(); + } + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/BlockSieve.java b/solvers/TCS-Meiji/tw/heuristic/BlockSieve.java new file mode 100644 index 0000000..a1609f5 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/BlockSieve.java @@ -0,0 +1,866 @@ +/* + * Copyright (c) 2017, Hisao Tamaki and Hiromu Otsuka +*/ + +package tw.heuristic; + +import java.io.PrintStream; + 
+import java.util.ArrayList; +import java.util.Arrays; + +public class BlockSieve{ + private static final String spaces64 = + " "; + public static final int MAX_CHILDREN_SIZE = 512; + private Node root; + private int n; + private int last; + private int targetWidth; + private int margin; + private int size; + + private abstract class Node{ + protected int index; + protected int width; + protected int ntz; + protected Node[] children; + protected VertexSet[] values; + protected int[] cardinalities; + + protected Node(int index, int width, int ntz){ + this.index = index; + this.width = width; + this.ntz = ntz; + } + + public abstract int indexOf(long label); + public abstract int add(long label); + public abstract int size(); + public abstract long getLabelAt(int i); + + public long getMask(){ + return Unsigned.consecutiveOneBit(ntz, ntz + width); + } + + public int add(long label, Node child){ + int i = add(label); + if(children != null){ + children = Arrays.copyOf(children, children.length + 1); + for(int j = children.length - 1; j - 1 >= i; j--){ + children[j] = children[j - 1]; + } + children[i] = child; + return i; + } + else{ + children = new Node[1]; + children[0] = child; + return 0; + } + } + + public int add(long label, VertexSet value){ + return add(label, value, value.cardinality()); + } + + public int add(long label, VertexSet value, int cardinality){ + int i = add(label); + if(values != null && cardinalities != null){ + values = Arrays.copyOf(values, values.length + 1); + for(int j = values.length - 1; j - 1 >= i; j--){ + values[j] = values[j - 1]; + } + values[i] = value; + cardinalities = + Arrays.copyOf(cardinalities, cardinalities.length + 1); + for(int j = cardinalities.length - 1; j - 1 >= i; j--){ + cardinalities[j] = cardinalities[j - 1]; + } + cardinalities[i] = cardinality; + return i; + } + else{ + values = new VertexSet[1]; + values[0] = value; + cardinalities = new int[1]; + cardinalities[0] = cardinality; + return 0; + } + } + + public 
boolean isLeaf(){ + return index == last && isLastInInterval(); + } + + public boolean isLastInInterval(){ + return ntz + width == 64; + } + + public abstract void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list); + + public abstract void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list); + + public abstract void dump(PrintStream ps, String indent); + } + + private class ByteNode extends Node{ + private byte[] labels; + + public ByteNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 8); + labels = new byte[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + public long getLabelAt(int i){ + return Unsigned.toUnsignedLong(labels[i]) << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, + Unsigned.byteValue((label & getMask()) >>> ntz)); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = Unsigned.byteValue((label & getMask()) >>> ntz); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for(int i = 0; i < labels.length; i++){ + ps.println(indent + Long.toBinaryString(labels[i])); + if(!isLeaf()){ + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list){ + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue(((longs[index] & mask) >>> ntz)); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + 
if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list) { + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + private class ShortNode extends Node{ + private short[] labels; + + public ShortNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 16); + labels = new short[0]; + } + + @Override + public int size(){ + return labels.length; + } + 
+ @Override + public long getLabelAt(int i){ + return Unsigned.toUnsignedLong(labels[i]) << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, + Unsigned.shortValue((label & getMask()) >>> ntz)); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = Unsigned.shortValue(((label & getMask()) >>> ntz)); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for (int i = 0; i < labels.length; i++) { + ps.println(indent + Long.toBinaryString(labels[i])); + if (!isLeaf()) { + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list){ + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, 
ArrayList< VertexSet > list) { + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + int label = Unsigned.toUnsignedInt(labels[i]); + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + private class IntegerNode extends Node{ + private int[] labels; + + public IntegerNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 32); + labels = new int[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + public long getLabelAt(int i){ + return Integer.toUnsignedLong(labels[i]) << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, + Unsigned.intValue((label & getMask()) >>> ntz)); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = Unsigned.intValue((label & getMask()) >>> ntz); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for (int i = 0; i < labels.length; i++) { + ps.println(indent + Long.toBinaryString(labels[i])); + if (!isLeaf()) { 
+ children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list){ + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + int label = labels[i]; + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + int label = labels[i]; + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list) { + long mask = getMask(); + int bits = 0; + if(index < longs.length){ + bits = Unsigned.intValue((longs[index] & mask) >>> ntz); + } + + int neighb = 0; + if(index < neighbors.length){ + neighb = Unsigned.intValue((neighbors[index] & mask) >>> ntz); + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + int label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + int label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + 
int intersects1 = intersects + + Integer.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + private class LongNode extends Node{ + private long[] labels; + + public LongNode(int index){ + this(index, 64, 0); + } + + private LongNode(int index, int width, int ntz){ + super(index, width, ntz); + assert(width <= 64); + labels = new long[0]; + } + + @Override + public int size(){ + return labels.length; + } + + @Override + public long getLabelAt(int i){ + return labels[i] << ntz; + } + + @Override + public int indexOf(long label){ + return Unsigned.binarySearch(labels, (label & getMask()) >>> ntz); + } + + @Override + public int add(long label){ + int i = -(indexOf(label)) - 1; + labels = Arrays.copyOf(labels, labels.length + 1); + for(int j = labels.length - 1; j - 1 >= i; j--){ + labels[j] = labels[j - 1]; + } + labels[i] = ((label & getMask()) >>> ntz); + return i; + } + + @Override + public void dump(PrintStream ps, String indent){ + for(int i = 0; i < labels.length; i++){ + ps.println(indent + Long.toBinaryString(labels[i])); + if(!isLeaf()){ + children[i].dump(ps, indent + spaces64); + } + } + } + + @Override + public void filterSuperblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list){ + long mask = getMask(); + long bits = 0; + if(index < longs.length){ + bits = (longs[index] & mask) >>> ntz; + } + + long neighb = 0; + if(index < neighbors.length){ + neighb = (neighbors[index] & mask) >>> ntz; + } + + if(isLeaf()){ + for(int i = labels.length - 1; i >= 0; i--){ + long label = labels[i]; + if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = labels.length - 1; i >= 0; i--){ + long label = labels[i]; + 
if(Unsigned.compare(bits, label) > 0){ + break; + } + if((bits & ~label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSuperblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + + @Override + public void filterSubblocks(long[] longs, long[] neighbors, + int intersects, ArrayList< VertexSet > list) { + long mask = getMask(); + long bits = 0; + if(index < longs.length){ + bits = (longs[index] & mask) >>> ntz; + } + + long neighb = 0; + if(index < neighbors.length){ + neighb = (neighbors[index] & mask) >>> ntz; + } + + if(isLeaf()){ + for(int i = 0; i < labels.length; i++){ + long label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 + cardinalities[i] + <= targetWidth + 1){ + list.add(values[i]); + } + } + } + } + else{ + for(int i = 0; i < labels.length; i++){ + long label = labels[i]; + if(Unsigned.compare(bits, label) < 0){ + break; + } + if((~bits & label) == 0){ + int intersects1 = intersects + + Long.bitCount(label & neighb); + if(intersects1 <= margin){ + children[i].filterSubblocks( + longs, neighbors, intersects1, list); + } + } + } + } + } + } + + public BlockSieve(int n, int targetWidth, int margin){ + this.n = n; + this.targetWidth = targetWidth; + this.margin = margin; + root = new LongNode(0); + last = (n - 1) / 64; + } + + public VertexSet put(VertexSet bs, VertexSet value){ + long longs[] = bs.toLongArray(); + Node node = root, parent = null; + + int i = 0, j1 = 0; + long bits = 0; + for(;;){ + bits = 0; + if(i < longs.length){ + bits = longs[i]; + } + int j = node.indexOf(bits); + if(j < 0){ + break; + } + if(node.isLeaf()){ + return node.values[j]; + } + parent = node; + node = node.children[j]; + i = node.index; + j1 = j; + } + + if(node.isLeaf()){ + node.add(bits, value); + } + else if(node.isLastInInterval()){ + node.add(bits, 
newPath(i + 1, longs, value)); + } + else{ + Node header = newNode( + i, 64 - (node.ntz + node.width), node.ntz + node.width); + if(!header.isLeaf()){ + header.add(bits, newPath(i + 1, longs, value)); + } + else{ + header.add(bits, value); + } + node.add(bits, header); + } + + ++size; + if(parent != null){ + parent.children[j1] = tryWidthResizing(node); + } + else{ + root = tryWidthResizing(node); + } + + return null; + } + + private Node tryWidthResizing(Node node){ + if(node.size() > MAX_CHILDREN_SIZE){ + Node node1 = resizeWidth(node); + for(int i = 0; i < node1.children.length; i++){ + node1.children[i] = tryWidthResizing(node1.children[i]); + } + return node1; + } + return node; + } + + private Node resizeWidth(Node node){ + int w = node.width, leng = node.size(); + long m = node.getMask(); + + long[] l = new long[leng]; + int ntz = Long.numberOfTrailingZeros(m); + int t = ntz + node.width; + while(l.length > MAX_CHILDREN_SIZE){ + t = (ntz + t) / 2; + m = Unsigned.consecutiveOneBit(ntz, t); + int p = 0; + for(int i = 0; i < leng; i++){ + long label = ((node.getLabelAt(i) & m) >>> ntz); + int j = Unsigned.binarySearch(l, 0, p, label); + if(j < 0){ + j = -j - 1; + for(int k = p; k - 1 >= j; k--){ + l[k] = l[k - 1]; + } + l[j] = label; + ++p; + } + } + l = Arrays.copyOfRange(l, 0, p); + } + + Node[] c = new Node[l.length]; + for(int i = 0; i < c.length; i++){ + long msk = node.getMask() & ~m; + c[i] = newNode(node.index, + Long.bitCount(msk), Long.numberOfTrailingZeros(msk)); + } + + for(int i = 0; i < leng; i++){ + long label = node.getLabelAt(i); + int j = Unsigned.binarySearch(l, ((label & m) >>> ntz)); + if(!node.isLeaf()){ + c[j].add(label, node.children[i]); + } + else{ + c[j].add(label, + node.values[i], node.cardinalities[i]); + } + } + + Node n1 = newNode(node.index, + Long.bitCount(m), Long.numberOfTrailingZeros(m)); + + for(int i = 0; i < l.length; i++){ + n1.add(l[i] << ntz, c[i]); + } + + return n1; + } + + private Node newNode(int index, int width, 
int ntz){ + if(width > 32){ + return new LongNode(index, width, ntz); + } + else if(width > 16){ + return new IntegerNode(index, width, ntz); + } + else if(width > 8){ + return new ShortNode(index, width, ntz); + } + else{ + return new ByteNode(index, width, ntz); + } + } + + private Node newPath(int index, long[] longs, VertexSet value){ + Node node = new LongNode(index); + + long bits = 0; + if(index < longs.length){ + bits = longs[index]; + } + + if(index == last){ + node.add(bits, value); + } + else{ + node.add(bits, newPath(index + 1, longs, value)); + } + + return node; + } + + public void collectSuperblocks( + VertexSet component, VertexSet neighbors, ArrayList< VertexSet > list){ + root.filterSuperblocks(component.toLongArray(), + neighbors.toLongArray(), 0, list); + } + + public void collectSubblocks( + VertexSet component, VertexSet neighbors, ArrayList< VertexSet > list){ + root.filterSubblocks(component.toLongArray(), + neighbors.toLongArray(), 0, list); + } + + public int size(){ + return size; + } + + public void dump(PrintStream ps){ + root.dump(ps, ""); + } + + public static void main(String args[]){ + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/CPUTimer.java b/solvers/TCS-Meiji/tw/heuristic/CPUTimer.java new file mode 100644 index 0000000..6755747 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/CPUTimer.java @@ -0,0 +1,58 @@ +/* + * Copyright (c) 2017, Hisao Tamaki +*/ + +package tw.heuristic; + +import java.lang.management.GarbageCollectorMXBean; +import java.lang.management.ManagementFactory; +import java.lang.management.ThreadMXBean; + +public class CPUTimer { + private long threadTimeStart; + private long gcTimeStart; + private int timeout; + + ThreadMXBean threadMXBean; + public CPUTimer() { + threadMXBean = ManagementFactory.getThreadMXBean(); + threadTimeStart = threadMXBean.getCurrentThreadCpuTime(); + gcTimeStart = gcTime(); + } + + public void setTimeout(int timeout) { + this.timeout = timeout; + } + + public long getThreadTime() 
{ + long ct = threadMXBean.getCurrentThreadCpuTime(); + return (ct - threadTimeStart) / 1000000; + } + + public long getGCTime() { + return gcTime() - gcTimeStart; + } + + public long getTime() { + return getThreadTime() + getGCTime(); + } + + private long gcTime() { + long gcTime = 0; + + for(GarbageCollectorMXBean gc : + ManagementFactory.getGarbageCollectorMXBeans()) { + + long time = gc.getCollectionTime(); + + if(time >= 0) { + gcTime += time; + } + } + return gcTime; + } + + public boolean hasTimedOut() { + return getTime() >= ((long) timeout) * 1000; + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/CutDecomposer.java b/solvers/TCS-Meiji/tw/heuristic/CutDecomposer.java new file mode 100644 index 0000000..a445d7f --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/CutDecomposer.java @@ -0,0 +1,437 @@ +/* + * Copyright (c) 2017, Keitaro Makii and Hiromu Ohtsuka +*/ + +package tw.heuristic; + +import java.util.ArrayList; +import java.util.BitSet; + +public class CutDecomposer{ + public static final int LN = 2000; + public static final int HN = 100000; + public static final int HM = 1000000; + public static final int ONET = 400000; + public static final int STEP = 1000; + public static final long DEFAULTMAXSTEP = 500000; + public static int now; + public static int cu; + public static int compSize; + public static long count; + public static boolean abort; + private Bag whole; + + private static final boolean DEBUG = false; + + private class CutDivide{ + Separator sep; + VertexSet c1,c2; + CutDivide(Separator s,VertexSet a,VertexSet b){ + sep = s; + c1 = a; + c2 = b; + } + } + private class NextBag{ + Bag bag; + int start; + NextBag(Bag b,int s){ + bag = b; + start = s; + } + } + + public CutDecomposer(Bag whole){ + this.whole = whole; + } + + public void decompose(){ + decompose(DEFAULTMAXSTEP); + } + + public boolean decompose(long timeMS){ + abort = false; + count = 0; + if(whole.graph.n > ONET){ + return true; + } + + decomposeWithOneCuts(); + if(getTimeMS() 
> timeMS){ + whole.flatten(); + whole.setWidth(); + abort = true; + return false; + } + whole.flatten(); + + if(whole.graph.n <= LN){ + decomposeWithTwoCuts(); + if(getTimeMS() > timeMS){ + whole.flatten(); + whole.setWidth(); + abort = true; + return false; + } + } + else if(whole.graph.n <= HN && whole.graph.numberOfEdges() <= HM){ + if(!decomposeWithSmallCuts(2,timeMS)){ + whole.flatten(); + whole.setWidth(); + abort = true; + return false; + } + } + + if(whole.graph.n <= 30000){ + whole.flatten(); + if(!decomposeWithSmallCuts(3,timeMS)){ + whole.flatten(); + whole.setWidth(); + abort = true; + return false; + } + } + if(whole.graph.n <= 20000){ + whole.flatten(); + if(!decomposeWithSmallCuts(4,timeMS)){ + whole.flatten(); + whole.setWidth(); + abort = true; + return false; + } + } + + + whole.flatten(); + whole.setWidth(); + + return true; + } + + private static void comment(String comment){ + System.out.println("c " + comment); + } + + private void decomposeWithOneCuts(){ + VertexSet articulationSet = new VertexSet(); + ArrayList< VertexSet > bcc = + whole.graph.getBiconnectedComponents(articulationSet); + + count += (whole.graph.n + whole.graph.numberOfEdges()); + + if(articulationSet.isEmpty()){ + return; + } + + if(DEBUG){ + comment("detected 1-cuts"); + } + + whole.initializeForDecomposition(); + + for(int a = articulationSet.nextSetBit(0); + a >= 0; a = articulationSet.nextSetBit(a + 1)){ + count++; + Separator s = whole.addSeparator(new VertexSet(new int[]{a})); + s.safe = true; + } + + for(VertexSet bc : bcc){ + count++; + whole.addNestedBag(bc); + } + + for(Separator s : whole.separators){ + for(Bag b : whole.nestedBags){ + count++; + if(s.vertexSet.isSubset(b.vertexSet)){ + b.addIncidentSeparator(s); + s.addIncidentBag(b); + } + } + } + + if(DEBUG){ + comment("decomposes with 1-cuts"); + comment("1-cutsSize:" + articulationSet.cardinality()); + } + + return; + } + + private boolean decomposeWithSmallCuts(int c,long timeMS){ + if(whole.nestedBags != 
null && !whole.nestedBags.isEmpty()){ + for(Bag nb : whole.nestedBags){ + if(!decomposeWithSmallCuts(nb,c,timeMS)){ + return false; + } + } + } + else{ + if(!decomposeWithSmallCuts(whole,c,timeMS)){ + return false; + } + } + if(DEBUG){ + comment("decompose with small-cuts"); + } + return true; + } + + private boolean decomposeWithSmallCuts(Bag bag,int c,long timeMS){ + if(bag != whole){ + bag.makeLocalGraph(); + count += bag.graph.n * (Math.log(bag.graph.n)+1) + bag.graph.numberOfEdges() * 1.2; + } + Graph lg = bag.graph; + + cu = c; + compSize = 6+cu; + + NextBag nb = new NextBag(bag,0); + + while(true){ + nb = decomposeWithSmallCuts(nb.bag,nb.start,lg.n); + if(getTimeMS() > timeMS){ + return false; + } + if (nb == null){ + return true; + } + nb.bag.makeLocalGraph(); + lg = nb.bag.graph; + count += nb.bag.graph.n * (Math.log(nb.bag.graph.n)+1) + nb.bag.graph.numberOfEdges() * 1.2; + + compSize = 6+cu; + } + } + + private NextBag decomposeWithSmallCuts(Bag bag,int start,int end){ + for(int i=start;i compSize || bag.graph.n <= (addSize + candSize)){ + return null; + } + if(left.isEmpty()){ + bag.initializeForDecomposition(); + Separator sep = bag.addSeparator(cand); + sep.figureOutSafetyBySPT(); + count += sep.getSteps() + bag.graph.n / 15; + if(sep.safe){ + count += bag.graph.n; + VertexSet big = bag.graph.all.clone(); + big.andNot(comp); + comp.or(cand); + return new CutDivide(sep,comp,big); + } + else{ + if(bag == whole){ + bag.nestedBags.clear(); + bag.separators.remove(sep); + } + else{ + bag.nestedBags = null; + bag.separators = null; + } + } + return null; + } + + int next = left.nextSetBit(0); + if(next == -1){ + return null; + } + if(candSize < cu){ + count++; + cand.set(next); + left.clear(next); + CutDivide cd = decomposeWithSmallCuts(bag,comp,cand,left); + if(cd != null){ + return cd; + } + cand.clear(next); + left.set(next); + } + if(next < now){ + return null; + } + + count++; + comp.set(next); + left = bag.graph.neighborSet(comp); + left = 
left.subtract(cand); + count += (bag.graph.n / (Math.log(bag.graph.n)+1)); + CutDivide cd = decomposeWithSmallCuts(bag,comp,cand,left); + if(cd != null){ + return cd; + } + return null; + } + + private void decomposeWithTwoCuts(){ + if(whole.nestedBags != null && !whole.nestedBags.isEmpty()){ + for(Bag nb : whole.nestedBags){ + nb.makeLocalGraph(); + count += nb.graph.n; + decomposeWithTwoCuts(nb); + } + } + else{ + decomposeWithTwoCuts(whole); + } + if(DEBUG){ + comment("decomposed with 2-cuts"); + } + } + + private void decomposeWithTwoCuts(Bag parent){ + ArrayList art = new ArrayList(); + Graph lg = parent.graph; + if(lg.n <= 1){ + return; + } +// count += lg.n * Math.log(lg.n) + lg.numberOfEdges(); + count += lg.n * lg.n / Math.log(lg.n) + lg.numberOfEdges(); + for(int i=0;i comp = lg.getComponents(sep); + Separator s = parent.addSeparator(sep); + s.safe = true; + count += lg.n; + + art.remove(0); + + for(VertexSet ver:comp){ + ver.or(sep); + Bag b = parent.addNestedBag(ver); + b.initializeForDecomposition(); + b.addIncidentSeparator(s); + s.addIncidentBag(b); + b.makeLocalGraph(); + + ArrayList nextart = new ArrayList(); + for(VertexSet oldart:art){ + count++; + if(oldart.isSubset(ver)){ + count++; + VertexSet na = new VertexSet(); + for(int i=oldart.nextSetBit(0);i!=-1;i=oldart.nextSetBit(i+1)){ + na.set(b.conv[i]); + } + nextart.add(na); + } + } + decomposeWithTwoCuts(b,nextart); + } + } + + private void decomposeWithTwoCuts(Bag parent,ArrayList art){ + if(art.size() == 0){ + count++; + if(DEBUG){ + parent.validate(); + } + parent.nestedBags = null; + parent.separators = null; + return; + } + + VertexSet sep = art.get(0); + ArrayList comp = parent.graph.getComponents(sep); + count += parent.graph.n; + Separator s = parent.addSeparator(sep); + s.safe = true; + + art.remove(0); + + for(VertexSet ver:comp){ + ver.or(sep);; + Bag b = parent.addNestedBag(ver); + b.initializeForDecomposition(); + b.addIncidentSeparator(s); + s.addIncidentBag(b); + 
b.makeLocalGraph(); +// count += Math.log(parent.graph.n); + ArrayList nextart = new ArrayList(); + for(VertexSet oldart:art){ + count++; + if(oldart.isSubset(ver)){ + count++; + VertexSet na = new VertexSet(); + for(int i=oldart.nextSetBit(0);i!=-1;i=oldart.nextSetBit(i+1)){ + na.set(b.conv[i]); + } + nextart.add(na); + } + } + decomposeWithTwoCuts(b,nextart); + } + } + + public long getTimeMS(){ + return count/1000; + } + + public boolean isAborted(){ + return abort; + } + + public static void main(String[] args){ + + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/Graph.java b/solvers/TCS-Meiji/tw/heuristic/Graph.java new file mode 100644 index 0000000..5d6d5c9 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/Graph.java @@ -0,0 +1,1058 @@ +/* + * Copyright (c) 2016, Hisao Tamaki and Hiromu Ohtsuka + */ + +package tw.heuristic; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.FileReader; +import java.io.IOException; +import java.io.PrintStream; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.BitSet; +import java.util.Random; +import java.util.Stack; + +/** + * This class provides a representation of undirected simple graphs. + * The vertices are identified by non-negative integers + * smaller than {@code n} where {@code n} is the number + * of vertices of the graph. + * The degree (the number of adjacent vertices) of each vertex + * is stored in an array {@code degree} indexed by the vertex number + * and the adjacency lists of each vertex + * is also referenced from an array {@code neighbor} indexed by + * the vertex number. These arrays as well as the int variable {@code n} + * are public to allow easy access to the graph content. + * Reading from and writing to files as well as some basic + * graph algorithms, such as decomposition into connected components, + * are provided. 
+ * + * @author Hisao Tamaki + */ +public class Graph { + /** + * number of vertices + */ + public int n; + + /** + * array of vertex degrees + */ + public int[] degree; + + /** + * array of adjacency lists each represented by an integer array + */ + public int[][] neighbor; + + /** + * set representation of the adjacencies. + * {@code neighborSet[v]} is the set of vertices + * adjacent to vertex {@code v} + */ + public VertexSet[] neighborSet; + + /** + * the set of all vertices, represented as an all-one + * bit vector + */ + public VertexSet all; + + /* + * variables used in the DFS algorithms for + * connected components and + * biconnected components. + */ + private int nc; + private int mark[]; + private int dfn[]; + private int low[]; + private int dfCount; + private VertexSet articulationSet; + + /** + * Construct a graph with the specified number of + * vertices and no edges. Edges will be added by + * the {@code addEdge} method + * @param n the number of vertices + */ + public Graph(int n) { + this.n = n; + this.degree = new int[n]; + this.neighbor = new int[n][]; + this.neighborSet = new VertexSet[n]; + for (int i = 0; i < n; i++) { + neighborSet[i] = new VertexSet(n); + } + this.all = new VertexSet(n); + for (int i = 0; i < n; i++) { + all.set(i); + } + } + + /** + * Add an edge between two specified vertices. + * This is done by adding each vertex to the adjacency list + * of the other. + * No effect if the specified edge is already present.
+ * @param u vertex (one end of the edge) + * @param v vertex (the other end of the edge) + */ + public void addEdge(int u, int v) { + addToNeighbors(u, v); + addToNeighbors(v, u); + } + + /** + * Add vertex {@code v} to the adjacency list of {@code u} + * @param u vertex number + * @param v vertex number + */ + private void addToNeighbors(int u, int v) { + if (indexOf(v, neighbor[u]) >= 0) { + return; + } + degree[u]++; + if (neighbor[u] == null) { + neighbor[u] = new int[]{v}; + } + else { + neighbor[u] = Arrays.copyOf(neighbor[u], degree[u]); + neighbor[u][degree[u] - 1] = v; + } + + if (neighborSet[u] == null) { + neighborSet[u] = new VertexSet(n); + } + neighborSet[u].set(v); + + if (neighborSet[v] == null) { + neighborSet[v] = new VertexSet(n); + } + neighborSet[v].set(u); + } + + /** + * Returns the number of edges of this graph + * @return the number of edges + */ + public int numberOfEdges() { + int count = 0; + for (int i = 0; i < n; i++) { + count += degree[i]; + } + return count / 2; + } + + /** + * Inherit edges of the given graph into this graph, + * according to the conversion tables for vertex numbers. + * @param g graph + * @param conv vertex conversion table from the given graph to + * this graph: if {@code v} is a vertex of graph {@code g}, then + * {@code conv[v]} is the corresponding vertex in this graph; + * {@code conv[v] = -1} if {@code v} does not have a corresponding vertex + * in this graph + * @param inv vertex conversion table from this graph to + * the argument graph: if {@code v} is a vertex of this graph, + * then {@code inv[v]} is the corresponding vertex in graph {@code g}; + * it is assumed that {@code v} always have a corresponding vertex in + * graph g. 
+ * + */ + public void inheritEdges(Graph g, int conv[], int inv[]) { + for (int v = 0; v < n; v++) { + int x = inv[v]; + for (int i = 0; i < g.degree[x]; i++) { + int y = g.neighbor[x][i]; + int u = conv[y]; + if (u >= 0) { + addEdge(u, v); + } + } + } + } + + /** + * Read a graph from the specified file in {@code dgf} format and + * return the resulting {@code Graph} object. + * @param path the path of the directory containing the file + * @param name the file name without the extension ".dgf" + * @return the resulting {@code Graph} object; null if the reading fails + */ + public static Graph readGraphDgf(String path, String name) { + File file = new File(path + "/" + name + ".dgf"); + return readGraphDgf(file); + } + + /** + * Read a graph from the specified file in {@code dgf} format and + * return the resulting {@code Graph} object. + * @param file file from which to read + * @return the resulting {@code Graph} object; null if the reading fails + */ + public static Graph readGraphDgf(File file) { + try { + BufferedReader br = new BufferedReader(new FileReader(file)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("p")) { + String s[] = line.split(" "); + int n = Integer.parseInt(s[2]); + // m is twice the number of edges explicitly listed + int m = Integer.parseInt(s[3]); + Graph g = new Graph(n); + + for (int i = 0; i < m; i++) { + line = br.readLine(); + while (!line.startsWith("e")) { + line = br.readLine(); + } + s = line.split(" "); + int u = Integer.parseInt(s[1]) - 1; + int v = Integer.parseInt(s[2]) - 1; + g.addEdge(u, v); + } + return g; + } + else { + throw new RuntimeException("!!No problem descrioption"); + } + + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return null; + } + + /** + * Read a graph from the specified file in {@code col} format and + * return the resulting {@code Graph} object. 
+ * @param path the path of the directory containing the file + * @param name the file name without the extension ".col" + * @return the resulting {@code Graph} object; null if the reading fails + */ + public static Graph readGraphCol(String path, String name) { + File file = new File(path + "/" + name + ".col"); + try { + BufferedReader br = new BufferedReader(new FileReader(file)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("p")) { + String s[] = line.split(" "); + int n = Integer.parseInt(s[2]); + // m is twice the number of edges in this format + int m = Integer.parseInt(s[3]); + Graph g = new Graph(n); + + for (int i = 0; i < m; i++) { + line = br.readLine(); + while (line != null && !line.startsWith("e")) { + line = br.readLine(); + } + if (line == null) { + break; + } + s = line.split(" "); + int u = Integer.parseInt(s[1]); + int v = Integer.parseInt(s[2]); + g.addEdge(u - 1, v - 1); + } + return g; + } + else { + throw new RuntimeException("!!No problem descrioption"); + } + + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return null; + } + + /** + * Read a graph from the specified file in {@code gr} format and + * return the resulting {@code Graph} object. + * The vertex numbers 1~n in the gr file format are + * converted to 0~n-1 in the internal representation. + * @param file graph file in {@code gr} format + * @return the resulting {@code Graph} object; null if the reading fails + */ + public static Graph readGraph(String path, String name) { + File file = new File(path + "/" + name + ".gr"); + return readGraph(file); + } + + /** + * Read a graph from the specified file in {@code gr} format and + * return the resulting {@code Graph} object. + * The vertex numbers 1~n in the gr file format are + * converted to 0~n-1 in the internal representation. 
+ * @param path the path of the directory containing the file + * @param name the file name without the extension ".gr" + * @return the resulting {@code Graph} object; null if the reading fails + */ + public static Graph readGraph(File file) { + try { + BufferedReader br = new BufferedReader(new FileReader(file)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("p")) { + String s[] = line.split(" "); + if (!s[1].equals("tw")) { + throw new RuntimeException("!!Not treewidth instance"); + } + int n = Integer.parseInt(s[2]); + int m = Integer.parseInt(s[3]); + Graph g = new Graph(n); + + for (int i = 0; i < m; i++) { + line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + s = line.split(" "); + int u = Integer.parseInt(s[0]); + int v = Integer.parseInt(s[1]); + g.addEdge(u - 1, v - 1); + } + return g; + } + else { + throw new RuntimeException("!!No problem descrioption"); + } + + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return null; + } + + public static Graph readGraph(InputStream is){ + try { + BufferedReader br = new BufferedReader(new InputStreamReader(is)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("p")) { + String s[] = line.split(" "); + if (!s[1].equals("tw")) { + throw new RuntimeException("!!Not treewidth instance"); + } + int n = Integer.parseInt(s[2]); + int m = Integer.parseInt(s[3]); + Graph g = new Graph(n); + + for (int i = 0; i < m; i++) { + line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + s = line.split(" "); + int u = Integer.parseInt(s[0]); + int v = Integer.parseInt(s[1]); + g.addEdge(u - 1, v - 1); + } + return g; + } + else { + throw new RuntimeException("!!No problem descrioption"); + } + + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch 
(IOException e) { + e.printStackTrace(); + } + return null; + } + + /** + * finds the first occurrence of the + * given integer in the given int array + * @param x value to be searched + * @param a array + * @return the smallest {@code i} such that + * {@code a[i]} = {@code x}; + * -1 if no such {@code i} exists + */ + private static int indexOf(int x, int a[]) { + if (a == null) { + return -1; + } + for (int i = 0; i < a.length; i++) { + if (x == a[i]) { + return i; + } + } + return -1; + } + + /** + * returns true if two vertices are adjacent to each other + * in this target graph + * @param u a vertex + * @param v another vertex + * @return {@code true} if {@code u} is adjacent to {@code v}; + * {@code false} otherwise + */ + public boolean areAdjacent(int u, int v) { + return indexOf(v, neighbor[u]) >= 0; + } + + /** + * returns the minimum degree, the smallest d such that + * there is some vertex {@code v} with {@code degree[v]} = d, + * of this target graph + * @return the minimum degree + */ + public int minDegree() { + if (n == 0) { + return 0; + } + int min = degree[0]; + for (int v = 0; v < n; v++) { + if (degree[v] < min) min = degree[v]; + } + return min; + } + + /** + * Computes the neighbor set for a given set of vertices + * @param set set of vertices + * @return a {@code VertexSet} representing the neighbor set of + * the given vertex set + */ + public VertexSet neighborSet(VertexSet set) { + VertexSet result = new VertexSet(n); + for (int v = set.nextSetBit(0); v >= 0; + v = set.nextSetBit(v + 1)) { + result.or(neighborSet[v]); + } + result.andNot(set); + return result; + } + + /** + * Computes the closed neighbor set for a given set of vertices + * @param set set of vertices + * @return a {@code VertexSet} representing the closed neighbor set of + * the given vertex set + */ + public VertexSet closedNeighborSet(VertexSet set) { + VertexSet result = (VertexSet) set.clone(); + for (int v = set.nextSetBit(0); v >= 0; + v = set.nextSetBit(v + 1)) {
result.or(neighborSet[v]); + } + return result; + } + + /** + * Compute connected components of this target graph after + * the removal of the vertices in the given separator, + * using Depth-First Search + * @param separator set of vertices to be removed + * @return the arrayList of connected components, + * the vertex set of each component represented by a {@code VertexSet} + */ + public ArrayList getComponentsDFS(VertexSet separator) { + ArrayList result = new ArrayList(); + mark = new int[n]; + for (int v = 0; v < n; v++) { + if (separator.get(v)) { + mark[v] = -1; + } + } + + nc = 0; + + for (int v = 0; v < n; v++) { + if (mark[v] == 0) { + nc++; + markFrom(v); + } + } + + for (int c = 1; c <= nc; c++) { + result.add(new VertexSet(n)); + } + + for (int v = 0; v < n; v++) { + int c = mark[v]; + if (c >= 1) { + result.get(c - 1).set(v); + } + } + return result; + } + + /** + * Recursive method for depth-first search + * vertices reachable from the given vertex, + * passing through only unmarked vertices (vertices + * with the mark[] value being 0 or -1), + * are marked by the value of {@code nc} which + * is a positive integer + * @param v vertex to be visited + */ + private void markFrom(int v) { + if (mark[v] != 0) return; + mark[v] = nc; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + markFrom(w); + } + } + + /** + * Compute connected components of this target graph after + * the removal of the vertices in the given separator, + * by means of iterated bit operations + * @param separator set of vertices to be removed + * @return the arrayList of connected components, + * the vertex set of each component represented by a {@code VertexSet} + */ + public ArrayList getComponents(VertexSet separator) { + ArrayList result = new ArrayList(); + VertexSet rest = all.subtract(separator); + for (int v = rest.nextSetBit(0); v >= 0; + v = rest.nextSetBit(v + 1)) { + // for (VertexSet found: result) { + // if (found.get(v)) { + // System.err.println(v + 
" is already in " + found); + // } + // } + VertexSet c = (VertexSet) neighborSet[v].clone(); + VertexSet toBeScanned = c.subtract(separator); + c.set(v); + while (!toBeScanned.isEmpty()) { + VertexSet save = (VertexSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; + w = toBeScanned.nextSetBit(w + 1)) { + // for (VertexSet found: result) { + // if (found.intersects(neighborSet[w])) { + // System.err.println("the neighborSet of " + w + ": " + + // neighborSet[w] + " intersects " + found); + // } + // } + c.or(neighborSet[w]); + } + toBeScanned = c.subtract(save); + toBeScanned.andNot(separator); + } + result.add(c.subtract(separator)); + rest.andNot(c); + } + + // for (int i = 0; i < result.size(); i++) { + // for (int j = i + 1; j < result.size(); j++) { + // if (result.get(i).intersects(result.get(j))) { + // writeTo(System.err); + // System.err.println(separator); + // checkConsistency(); + // System.err.println(result.get(i).intersectWith(result.get(j))); + // throw new RuntimeException("non-disjoint components " + // + result.get(i) + ", " + result.get(j)); + // } + // } + // } + return result; + } + + /** + * Compute the full components associated with the given separator, + * by means of iterated bit operations + * @param separator set of vertices to be removed + * @return the arrayList of full components, + * the vertex set of each component represented by a {@code VertexSet} + */ + public ArrayList getFullComponents(VertexSet separator) { + ArrayList result = new ArrayList(); + VertexSet rest = all.subtract(separator); + for (int v = rest.nextSetBit(0); v >= 0; + v = rest.nextSetBit(v + 1)) { + VertexSet c = (VertexSet) neighborSet[v].clone(); + VertexSet toBeScanned = c.subtract(separator); + c.set(v); + while (!toBeScanned.isEmpty()) { + VertexSet save = (VertexSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; + w = toBeScanned.nextSetBit(w + 1)) { + c.or(neighborSet[w]); + } + toBeScanned = c.subtract(save); + 
toBeScanned.andNot(separator); + } + if (separator.isSubset(c)) { + result.add(c.subtract(separator)); + } + rest.andNot(c); + } + return result; + } + + /** + * Checks if the given induced subgraph of this target graph is connected. + * @param vertices the set of vertices inducing the subgraph + * @return {@code true} if the subgraph is connected; {@code false} otherwise + */ + + public boolean isConnected(VertexSet vertices) { + int v = vertices.nextSetBit(0); + if (v < 0) { + return true; + } + + VertexSet c = (VertexSet) neighborSet[v].clone(); + VertexSet toScan = c.intersectWith(vertices); + c.set(v); + while (!toScan.isEmpty()) { + VertexSet save = (VertexSet) c.clone(); + for (int w = toScan.nextSetBit(0); w >= 0; + w = toScan.nextSetBit(w + 1)) { + c.or(neighborSet[w]); + } + toScan = c.subtract(save); + toScan.and(vertices); + } + return vertices.isSubset(c); + } + + /** + * Checks if the given induced subgraph of this target graph is biconnected. + * @param vertices the set of vertices inducing the subgraph + * @return {@code true} if the subgraph is biconnected; {@code false} otherwise + */ + public boolean isBiconnected(BitSet vertices) { + // if (!isConnected(vertices)) { + // return false; + // } + dfCount = 1; + dfn = new int[n]; + low = new int[n]; + + for (int v = 0; v < n; v++) { + if (!vertices.get(v)) { + dfn[v] = -1; + } + } + + int s = vertices.nextSetBit(0); + dfn[s] = dfCount++; + low[s] = dfn[s]; + + boolean first = true; + for (int i = 0; i < degree[s]; i++) { + int v = neighbor[s][i]; + if (dfn[v] != 0) { + continue; + } + if (!first) { + return false; + } + boolean b = dfsForBiconnectedness(v); + if (!b) return false; + else { + first = false; + } + } + return true; + } + + /** + * Depth-first search for deciding biconnectivity.
+ * @param v vertex to be visited + * @return {@code false} if an articulation point is found + * in the search starting from {@code v}; {@code true} otherwise + */ + private boolean dfsForBiconnectedness(int v) { + dfn[v] = dfCount++; + low[v] = dfn[v]; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + if (dfn[w] > 0 && dfn[w] < low[v]) { + low[v] = dfn[w]; + } + else if (dfn[w] == 0) { + boolean b = dfsForBiconnectedness(w); + if (!b) { + return false; + } + if (low[w] >= dfn[v]) { + return false; + } + if (low[w] < low[v]) { + low[v] = low[w]; + } + } + } + return true; + } + + + /** + * Checks if the given induced subgraph of this target graph is triconnected. + * This implementation is naive and calls isBiconnected n times, where n is + * the number of vertices. + * @param vertices the set of vertices inducing the subgraph + * @return {@code true} if the subgraph is triconnected; {@code false} otherwise + */ + public boolean isTriconnected(BitSet vertices) { + if (!isBiconnected(vertices)) { + return false; + } + + BitSet work = (BitSet) vertices.clone(); + int prev = -1; + for (int v = vertices.nextSetBit(0); v >= 0; + v = vertices.nextSetBit(v + 1)) { + if (prev >= 0) { + work.set(prev); + } + prev = v; + work.clear(v); + if (!isBiconnected(work)) { + return false; + } + } + return true; + } + + /** + * Compute articulation vertices of the subgraph of this + * target graph induced by the given set of vertices. + * Assumes this subgraph is connected; otherwise, only + * those articulation vertices in the first connected component + * are obtained.
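The dfn/low bookkeeping used above is the classic lowpoint technique. A stripped-down sketch for a whole connected graph given as adjacency arrays (names illustrative; the solver's version additionally restricts the search to an induced subgraph by pre-marking excluded vertices with dfn = -1):

```java
import java.util.BitSet;

public class Articulations {
    static int[] dfn, low;
    static int dfCount;
    static BitSet found;

    // Lowpoint test: a non-root vertex v is an articulation vertex iff some
    // DFS child w has low[w] >= dfn[v]; the root is one iff it has >= 2 DFS children.
    static BitSet articulations(int[][] adj) {
        int n = adj.length;
        dfn = new int[n];
        low = new int[n];
        dfCount = 1;
        found = new BitSet(n);
        dfn[0] = dfCount++;
        low[0] = dfn[0];
        int rootChildren = 0;
        for (int w : adj[0]) {
            if (dfn[w] == 0) {
                rootChildren++;
                dfs(adj, w);
            }
        }
        if (rootChildren >= 2) {
            found.set(0);
        }
        return found;
    }

    static void dfs(int[][] adj, int v) {
        dfn[v] = dfCount++;
        low[v] = dfn[v];
        for (int w : adj[v]) {
            if (dfn[w] > 0) {
                low[v] = Math.min(low[v], dfn[w]); // back edge (or tree edge to parent)
            } else {
                dfs(adj, w);
                if (low[w] >= dfn[v]) {
                    found.set(v); // no back edge climbs from w's subtree above v
                }
                low[v] = Math.min(low[v], low[w]);
            }
        }
    }
}
```

On a path 0-1-2 the middle vertex is the only articulation vertex; a triangle has none.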
+ * + * @param vertices the set of vertices of the subgraph + * @return the set of articulation vertices + */ + public VertexSet articulations(BitSet vertices) { + articulationSet = new VertexSet(n); + dfCount = 1; + dfn = new int[n]; + low = new int[n]; + + for (int v = 0; v < n; v++) { + if (!vertices.get(v)) { + dfn[v] = -1; + } + } + + depthFirst(vertices.nextSetBit(0)); + return articulationSet; + } + + /** + * Depth-first search for listing articulation vertices. + * The articulations found in the search are + * added to the {@code VertexSet articulationSet}. + * @param v vertex to be visited + */ + private void depthFirst(int v) { + dfn[v] = dfCount++; + low[v] = dfn[v]; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + if (dfn[w] > 0) { + low[v] = Math.min(low[v], dfn[w]); + } + else if (dfn[w] == 0) { + depthFirst(w); + if (low[w] >= dfn[v] && + (dfn[v] > 1 || !lastNeighborIndex(v, i))){ + articulationSet.set(v); + } + low[v] = Math.min(low[v], low[w]); + } + } + } + + public ArrayList< VertexSet > getBiconnectedComponents(VertexSet articulationSet){ + dfCount = 1; + dfn = new int[n]; + low = new int[n]; + + ArrayList< VertexSet > bcc = new ArrayList< >(); + Stack< VertexSet > stack = new Stack< >(); + dfsForBiconnectedDecomposition(0, stack, bcc, articulationSet); + + VertexSet bc = new VertexSet(); + while(!stack.isEmpty()){ + bc.or(stack.pop()); + } + bcc.add(bc); + + return bcc; + } + + private void dfsForBiconnectedDecomposition(int v, + Stack< VertexSet > stack, ArrayList< VertexSet > bcc, VertexSet articulationSet){ + dfn[v] = dfCount++; + low[v] = dfn[v]; + for (int i = 0; i < degree[v]; i++) { + int w = neighbor[v][i]; + if (dfn[w] > 0) { + low[v] = Math.min(low[v], dfn[w]); + } + else if (dfn[w] == 0) { + VertexSet edge = new VertexSet(new int[]{v, w}); + stack.push(edge); + dfsForBiconnectedDecomposition(w, stack, bcc, articulationSet); + if (low[w] >= dfn[v] && + (dfn[v] > 1 || !lastNeighborIndex(v, i))){ + 
articulationSet.set(v); + VertexSet bc = new VertexSet(); + while(!stack.peek().equals(edge)){ + bc.or(stack.pop()); + } + bc.or(stack.pop()); + bcc.add(bc); + } + low[v] = Math.min(low[v], low[w]); + } + } + } + + /** + * Decides if the given index is the effectively + * last index of the neighbor array of the given vertex, + * ignoring vertices not in the current subgraph + * considered, which is known by their dfn being -1. + * @param v the vertex in question + * @param i the index in question + * @return {@code true} if {@code i} is effectively + * the last index of the neighbor array of vertex {@code v}; + * {@code false} otherwise. + */ + + private boolean lastNeighborIndex(int v, int i) { + for (int j = i + 1; j < degree[v]; j++) { + int w = neighbor[v][j]; + if (dfn[w] == 0) { + return false; + } + } + return true; + } + + /** + * fill the specified vertex set into a clique + * @param vertexSet vertex set to be filled + */ + public void fill(VertexSet vertexSet) { + for (int v = vertexSet.nextSetBit(0); v >= 0; + v = vertexSet.nextSetBit(v + 1)) { + VertexSet missing = vertexSet.subtract(neighborSet[v]); + for (int w = missing.nextSetBit(v + 1); w >= 0; + w = missing.nextSetBit(w + 1)) { + addEdge(v, w); + } + } + } + + /** + * fill the specified vertex set into a clique + * @param vertices int array listing the vertices in the set + */ + public void fill(int[] vertices) { + for (int i = 0; i < vertices.length; i++) { + for (int j = i + 1; j < vertices.length; j++) { + addEdge(vertices[i], vertices[j]); + } + } + } + + /** list all maximal cliques of this graph + * Naive implementation, should be replaced by a better one + * @return + */ + public ArrayList listMaximalCliques() { + ArrayList list = new ArrayList<>(); + VertexSet subg = new VertexSet(n); + VertexSet cand = new VertexSet(n); + VertexSet qlique = new VertexSet(n); + subg.set(0,n); + cand.set(0,n); + listMaximalCliques(subg, cand, qlique, list); + return list; + } + + /** + * Auxiliary recursive 
method for listing maximal cliques. + * Adds to {@code list} all maximal cliques + * @param subg the vertices of the subgraph still to be covered + * @param cand the candidate vertices for extending the current clique + * @param clique the clique under construction + * @param list the accumulator for the maximal cliques found + */ + private void listMaximalCliques(VertexSet subg, VertexSet cand, + VertexSet clique, ArrayList<VertexSet> list) { + if(subg.isEmpty()){ + list.add((VertexSet)clique.clone()); + return; + } + int max = -1; + VertexSet u = new VertexSet(n); + for(int i=subg.nextSetBit(0);i>=0;i=subg.nextSetBit(i+1)){ + VertexSet tmp = new VertexSet(n); + tmp.set(i); + tmp = neighborSet(tmp); + tmp.and(cand); + if(tmp.cardinality() > max){ + max = tmp.cardinality(); + u = tmp; + } + } + VertexSet candu = (VertexSet) cand.clone(); + candu.andNot(u); + while(!candu.isEmpty()){ + int i = candu.nextSetBit(0); + VertexSet tmp = new VertexSet(n); + tmp.set(i); + clique.set(i); + VertexSet subgq = (VertexSet) subg.clone(); + subgq.and(neighborSet(tmp)); + VertexSet candq = (VertexSet) cand.clone(); + candq.and(neighborSet(tmp)); + listMaximalCliques(subgq,candq,clique,list); + cand.clear(i); + candu.clear(i); + clique.clear(i); + } + } + + /** + * Saves this target graph in the file specified by a path string, + * in .gr format. + * A stack trace will be printed if the file is not available for writing. + * @param path the path-string + */ + public void save(String path) { + File outFile = new File(path); + PrintStream ps; + try { + ps = new PrintStream(new FileOutputStream(outFile)); + writeTo(ps); + ps.close(); + } catch (FileNotFoundException e) { + e.printStackTrace(); + } + } + /** + * Write this target graph in .gr format to the given + * print stream.
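The clique enumeration above follows the pivoting scheme of Tomita et al.: choose a pivot u in SUBG maximizing |N(u) ∩ CAND| and branch only on candidates outside N(u). A self-contained sketch on `java.util.BitSet` (class and method names are illustrative, not from this code):

```java
import java.util.ArrayList;
import java.util.BitSet;

public class MaximalCliques {
    static ArrayList<BitSet> listAll(BitSet[] nb) {
        int n = nb.length;
        BitSet subg = new BitSet(n); subg.set(0, n);
        BitSet cand = new BitSet(n); cand.set(0, n);
        ArrayList<BitSet> out = new ArrayList<>();
        expand(nb, subg, cand, new BitSet(n), out);
        return out;
    }

    static void expand(BitSet[] nb, BitSet subg, BitSet cand, BitSet clique, ArrayList<BitSet> out) {
        if (subg.isEmpty()) {
            out.add((BitSet) clique.clone()); // current clique is maximal
            return;
        }
        // pivot u in SUBG maximizing |N(u) ∩ CAND|
        int best = -1;
        BitSet u = new BitSet();
        for (int i = subg.nextSetBit(0); i >= 0; i = subg.nextSetBit(i + 1)) {
            BitSet t = (BitSet) nb[i].clone();
            t.and(cand);
            if (t.cardinality() > best) {
                best = t.cardinality();
                u = t;
            }
        }
        BitSet ext = (BitSet) cand.clone();
        ext.andNot(u); // branch only on candidates not adjacent to the pivot
        for (int q = ext.nextSetBit(0); q >= 0; q = ext.nextSetBit(q + 1)) {
            clique.set(q);
            BitSet subgQ = (BitSet) subg.clone(); subgQ.and(nb[q]);
            BitSet candQ = (BitSet) cand.clone(); candQ.and(nb[q]);
            expand(nb, subgQ, candQ, clique, out);
            cand.clear(q);
            clique.clear(q);
        }
    }
}
```

For a triangle {0, 1, 2} with a pendant vertex 3 attached to 2, it reports exactly the two maximal cliques {0, 1, 2} and {2, 3}.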
+ * @param ps print stream + */ + public void writeTo(PrintStream ps) { + int m = 0; + for (int i = 0; i < n; i++) { + m += degree[i]; + } + m = m / 2; + ps.println("p tw " + n + " " + m); + for (int i = 0; i < n; i++) { + for (int j = 0; j < degree[i]; j++) { + int k = neighbor[i][j]; + if (i < k) { + ps.println((i + 1) + " " + (k + 1)); + } + } + } + } + + /** + * Create a copy of this target graph + * @return the copy of this graph + */ + public Graph copy() { + Graph tmp = new Graph(n); + for (int v = 0; v < n; v++) { + if(neighbor[v] != null){ + for (int j = 0; j < neighbor[v].length; j++) { + int w = neighbor[v][j]; + tmp.addEdge(v, w); + } + } + } + return tmp; + } + + /** + * Check consistency of this graph. + * @throws RuntimeException if the adjacency lists or neighbor sets are inconsistent + */ + public void checkConsistency() throws RuntimeException { + for (int v = 0; v < n; v++) { + for (int w = 0; w < n; w++) { + if (v == w) continue; + if (indexOf(v, neighbor[w]) >= 0 && + indexOf(w, neighbor[v]) < 0) { + throw new RuntimeException("adjacency lists inconsistent " + v + ", " + w); + } + if (neighborSet[v].get(w) && + !neighborSet[w].get(v)) { + throw new RuntimeException("neighborSets inconsistent " + v + ", " + w); + } + } + } + } + /** + * Create a random graph with the given number of vertices and + * the given number of edges + * @param n the number of vertices + * @param m the number of edges + * @param seed the seed for the pseudo-random number generation + * @return {@code Graph} instance constructed + */ + public static Graph randomGraph(int n, int m, int seed) { + Random random = new Random(seed); + Graph g = new Graph(n); + + int k = 0; + int j = 0; + int m0 = n * (n - 1) / 2; + for (int v = 0; v < n; v++) { + for (int w = v + 1; w < n; w++) { + int r = random.nextInt(m0 - j); + if (r < m - k) { + g.addEdge(v, w); + g.addEdge(w, v); + k++; + } + j++; + } + } + return g; + } + + public static void main(String args[]) { + // an example of the use of random graph generation + for(int i = 0; i < 100; i++){ + Graph g =
randomGraph(10, 30, i); + g.save("instance/random/gnm_10_30_" + i + ".gr"); + } + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/GreedyDecomposer.java b/solvers/TCS-Meiji/tw/heuristic/GreedyDecomposer.java new file mode 100644 index 0000000..bf8db2d --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/GreedyDecomposer.java @@ -0,0 +1,400 @@ +/* + * Copyright (c) 2017, Takuto Sato and Hiromu Ohtsuka +*/ + +package tw.heuristic; + +import java.io.File; + +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Comparator; +import java.util.HashSet; +import java.util.LinkedList; +import java.util.Queue; +import java.util.Set; +import java.util.TreeSet; +import java.util.HashMap; +import java.util.Map; +import java.util.TreeMap; +import java.util.List; + +public class GreedyDecomposer { + // static final boolean VERBOSE = true; + private static final boolean VERBOSE = false; + private static boolean DEBUG = false; + //private static boolean DEBUG = false; + + private final static int GRAPH_EDGE_SIZE = 1_000_000; + private final static int GRAPH_VERTEX_SIZE = 10_000; + + Graph g; + + Bag whole; + + Map> frontier; + + VertexSet remaining; + + TreeSet minCostSet; + + Edge[] edges; + int[] oldFill; + + boolean modeMinDegree; + boolean modeExact; + + boolean timeOn; + boolean abort; + long step; + long timeLimit; + static final long STEPS_PER_MS = 1; + + public GreedyDecomposer(Bag whole) { + this.whole = whole; + this.g = whole.graph.copy(); + } + + public void decompose() { + timeOn = false; + abort = false; + step = 0; + + selectDecompose(); + } + + public boolean decompose(long timeMS){ + timeOn = true; + abort = false; + step = 0; + timeLimit = STEPS_PER_MS * timeMS; + + selectDecompose(); + + if(abort){ + return false; + } + return true; + } + + public boolean isAborted(){ + return timeOn && abort; + } + + public long getTimeMS(){ + return step / 
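The edge-picking loop in Graph.randomGraph is Knuth's selection sampling (Algorithm S): scanning all n(n-1)/2 vertex pairs and keeping each with probability (m - chosen)/(pairs - seen) yields exactly m edges. A standalone sketch (names illustrative):

```java
import java.util.ArrayList;
import java.util.Random;

public class SelectionSampling {
    // Choose exactly m of the n*(n-1)/2 vertex pairs, uniformly at random,
    // in a single pass, as in Graph.randomGraph.
    static ArrayList<int[]> sampleEdges(int n, int m, long seed) {
        Random random = new Random(seed);
        ArrayList<int[]> edges = new ArrayList<>();
        int pairs = n * (n - 1) / 2;
        int seen = 0, chosen = 0;
        for (int v = 0; v < n; v++) {
            for (int w = v + 1; w < n; w++) {
                // keep this pair with probability (m - chosen) / (pairs - seen)
                if (random.nextInt(pairs - seen) < m - chosen) {
                    edges.add(new int[]{v, w});
                    chosen++;
                }
                seen++;
            }
        }
        return edges;
    }
}
```

The invariant guarantees exactly m edges regardless of the seed, which is why the generator never needs rejection or retries.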
STEPS_PER_MS; + } + + private void selectDecompose() { + int sum = 0; + for(int v = 0; v < g.n; v++) { + sum += g.degree[v]; + } + sum /= 2; + + if(sum <= GRAPH_EDGE_SIZE) { + modeMinDegree = false; + //if(g.n <= GRAPH_VERTEX_SIZE) { + if(sum <= GRAPH_VERTEX_SIZE) { + modeExact = true; + } + else { + modeExact = false; + } + } + else { + modeMinDegree = true; + modeExact = false; + } + + mainDecompose(); + } + + public void minFillDecompose() { + modeMinDegree = false; + mainDecompose(); + } + + public void minDegreeDecompose() { + modeMinDegree = true; + mainDecompose(); + } + + private void initialize() { + whole.initializeForDecomposition(); + + frontier = new HashMap<>(); + for(int i = 0; i < g.n; i++) { + frontier.put(i, new HashSet<>()); + } + + remaining = (VertexSet) g.all.clone(); + + minCostSet = new TreeSet<>(); + + if(modeExact) { + oldFill = new int[g.n]; + } + + for(int v = 0; v < g.n; v++) { + int cost = 0; + if(modeMinDegree) { + cost = g.degree[v]; + } + else { + cost = costOf(v); + } + Pair p = new Pair(v, cost); + minCostSet.add(p); + if(modeExact) { + oldFill[v] = cost; + } + } + } + + private void mainDecompose() { + initialize(); + + while (!remaining.isEmpty()) { + if(timeOn && step > timeLimit){ + abort = true; + return; + } + + int vmin; + if(modeExact) { + Pair p = minCostSet.first(); + minCostSet.remove(p); + vmin = p.v; + int cost = fillCount(vmin); + } + else { + vmin = delayProcess(); + } + + Set vminInSeparators = frontier.get(vmin); + if(vminInSeparators.size() == 1) { + VertexSet neighborSet = g.neighborSet[vmin].intersectWith(remaining); + Separator uniqueSeparator = null; + for(Separator s : vminInSeparators) { + uniqueSeparator = s; + } + + if(neighborSet.isSubset(uniqueSeparator.vertexSet)) { + uniqueSeparator.removeVertex(vmin); + if(uniqueSeparator.vertexSet.isEmpty()) { + whole.separators.remove(uniqueSeparator); + for(Bag b : uniqueSeparator.incidentBags) { + b.incidentSeparators.remove(uniqueSeparator); + } + } + 
remaining.clear(vmin); + + if(!modeMinDegree && modeExact) { + VertexSet vs = g.neighborSet[vmin].intersectWith(remaining); + VertexSet updateSet = g.closedNeighborSet(vs); + updateSet.and(remaining); + updateProcess(updateSet); + } + continue; + } + } + + VertexSet toBeAClique = new VertexSet(g.n); + toBeAClique.set(vmin); + toBeAClique.or(g.neighborSet[vmin].intersectWith(remaining)); + Bag bag = whole.addNestedBag(toBeAClique); + + VertexSet sep = toBeAClique.subtract(new VertexSet(new int[]{vmin})); + + if(modeMinDegree) { + for(int v = sep.nextSetBit(0); v >= 0; v = sep.nextSetBit(v + 1)) { + g.neighborSet[v].or(sep); + g.neighborSet[v].clear(v); + } + } + else { + if(edges != null) { + for(Edge e : edges) { + g.addEdge(e.v, e.w); + } + } + } + + if (!sep.isEmpty()) { + Separator separator = whole.addSeparator(sep); + + separator.addIncidentBag(bag); + bag.addIncidentSeparator(separator); + + for(int v = separator.vertexSet.nextSetBit(0); v >= 0; v = separator.vertexSet.nextSetBit(v + 1)) { + frontier.get(v).add(separator); + } + } + + for (Separator s : vminInSeparators) { + s.addIncidentBag(bag); + bag.addIncidentSeparator(s); + + for(int v = s.vertexSet.nextSetBit(0); v >= 0; v = s.vertexSet.nextSetBit(v + 1)) { + if(v != vmin){ + frontier.get(v).remove(s); + } + } + } + frontier.remove(vmin, vminInSeparators); + + remaining.clear(vmin); + + if(!modeMinDegree && modeExact) { + VertexSet vs = g.neighborSet[vmin].intersectWith(remaining); + VertexSet updateSet = g.closedNeighborSet(vs); + updateSet.and(remaining); + updateProcess(updateSet); + } + } + + whole.setWidth(); + } + + private void updateProcess(VertexSet updateSet) { + for(int v = updateSet.nextSetBit(0); v >= 0; v = updateSet.nextSetBit(v + 1)) { + int fill = fillCount(v); + + Pair old = new Pair(v, oldFill[v]); + Pair update = new Pair(v, fill); + + minCostSet.remove(old); + minCostSet.add(update); + oldFill[v] = fill; + } + } + + private int delayProcess() { + Pair p = minCostSet.first(); + 
for(;;) { + if(p.cost == 0) { + minCostSet.remove(p); + edges = null; + break; + } + int cost = costOf(p.v); + if(cost <= p.cost) { + minCostSet.remove(p); + break; + } + else { + minCostSet.remove(p); + Pair q = new Pair(p.v, cost); + minCostSet.add(q); + p = null; + p = minCostSet.first(); + } + } + return p.v; + } + + private int costOf(int v) { + ++step; + if(modeMinDegree) { + return degreeOf(v); + } + else { + return fillCount(v); + } + } + + private int degreeOf(int v) { + VertexSet vs = g.neighborSet[v].intersectWith(remaining); + return vs.cardinality(); + } + + private int fillCount(int v) { + VertexSet vNeighborSet = remaining.intersectWith(g.neighborSet[v]); + ArrayList addEdges = new ArrayList<>(); + int count = 0; + + for(int w = vNeighborSet.nextSetBit(0); w >= 0; w = vNeighborSet.nextSetBit(w + 1)) { + VertexSet noNeighborSet = vNeighborSet.subtract(vNeighborSet.intersectWith(g.neighborSet[w])); + noNeighborSet.and(remaining); + noNeighborSet.clear(w); + for(int x = noNeighborSet.nextSetBit(w); x >= 0; x = noNeighborSet.nextSetBit(x + 1)) { + Edge e = new Edge(w, x); + addEdges.add(e); + count++; + } + } + edges = addEdges.toArray(new Edge[0]); + return count; + } + + private class Pair implements Comparable { + int v; + int cost; + + Pair(int v, int cost) { + this.v = v; + this.cost = cost; + } + + @Override + public int compareTo(Pair p){ + if(cost != p.cost){ + return Integer.compare(cost, p.cost); + } + return Integer.compare(v, p.v); + } + + @Override + public boolean equals(Object obj){ + if(!(obj instanceof Pair)){ + return false; + } + Pair p = (Pair)obj; + return v == p.v && + cost == p.cost; + } + + @Override + public String toString() { + return "v : " + v + ", cost : " + cost; + } + } + + private class Edge { + int v; + int w; + + public Edge(int v, int w) { + this.v = v; + this.w = w; + } + + @Override + public boolean equals(Object obj) { + if(!(obj instanceof Edge)){ + return false; + } + Edge e = (Edge) obj; + return (v == e.v && w 
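The min-fill cost computed by fillCount is the number of edges missing among the still-remaining neighbors of v, found per neighbor w with a single andNot against w's row and counted once per unordered pair. A self-contained sketch with `java.util.BitSet` standing in for `VertexSet` (names illustrative):

```java
import java.util.BitSet;

public class MinFill {
    // Fill-in cost of eliminating v: edges that would have to be added to
    // turn the remaining neighbors of v into a clique, as in
    // GreedyDecomposer.fillCount.
    static int fillCount(BitSet[] nb, BitSet remaining, int v) {
        BitSet neigh = (BitSet) remaining.clone();
        neigh.and(nb[v]);
        int count = 0;
        for (int w = neigh.nextSetBit(0); w >= 0; w = neigh.nextSetBit(w + 1)) {
            BitSet missing = (BitSet) neigh.clone();
            missing.andNot(nb[w]); // neighbors of v that w is not adjacent to
            // start at w + 1 so each missing edge {w, x} is counted once
            for (int x = missing.nextSetBit(w + 1); x >= 0; x = missing.nextSetBit(x + 1)) {
                count++;
            }
        }
        return count;
    }
}
```

For a star with center 0 and leaves 1, 2, 3, eliminating the center costs 3 (all neighbor pairs are missing), while eliminating a leaf costs 0; min-degree would rank these the other way only by degree, which is why the two heuristics can pick different vertices.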
== e.w) || (v == e.w && w == e.v); + } + + @Override + public int hashCode() { + int seed = 1234; + return seed ^ v ^ w; + } + } + + private static void test() { + } + + private static void targetTest() { + } + + public static void main(String[] args) { + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/LayeredSieve.java b/solvers/TCS-Meiji/tw/heuristic/LayeredSieve.java new file mode 100644 index 0000000..d12acb3 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/LayeredSieve.java @@ -0,0 +1,53 @@ +/* + * Copyright (c) 2017, Hisao Tamaki +*/ + +package tw.heuristic; + +import java.util.ArrayList; + +public class LayeredSieve { + int n; + int targetWidth; + BlockSieve sieves[]; + + public LayeredSieve(int n, int targetWidth) { + this.n = n; + this.targetWidth = targetWidth; + + int k = 33 - Integer.numberOfLeadingZeros(targetWidth); + sieves = new BlockSieve[k]; + for (int i = 0; i < k; i++) { + int margin = (1 << i) - 1; + sieves[i] = new BlockSieve(n, targetWidth, margin); + } + } + + public void put(VertexSet vertices, VertexSet neighbors) { + int ns = neighbors.cardinality(); + int margin = targetWidth + 1 - ns; + int i = 32 - Integer.numberOfLeadingZeros(margin); + sieves[i].put(vertices, neighbors); + } + + public void put(VertexSet vertices, int neighborSize, VertexSet value) { + int margin = targetWidth + 1 - neighborSize; + int i = 32 - Integer.numberOfLeadingZeros(margin); + sieves[i].put(vertices, value); + } + + public void collectSuperblocks(VertexSet component, VertexSet neighbors, + ArrayList list) { + for (BlockSieve sieve: sieves) { + sieve.collectSuperblocks(component, neighbors, list); + } + } + + public int[] getSizes() { + int sizes[] = new int[sieves.length]; + for (int i = 0; i < sieves.length; i++) { + sizes[i] = sieves[i].size(); + } + return sizes; + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/MTDecomposerHeuristic.java b/solvers/TCS-Meiji/tw/heuristic/MTDecomposerHeuristic.java new file mode 100644 index 0000000..65bc36e --- 
/dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/MTDecomposerHeuristic.java @@ -0,0 +1,1316 @@ +/* + * Copyright (c) 2017, Hisao Tamaki and Hiromu Otsuka +*/ + +package tw.heuristic; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedList; +import java.util.Map; +import java.util.Queue; +import java.util.Set; + +public class MTDecomposerHeuristic { + + //static final boolean VERBOSE = true; + private static final boolean VERBOSE = false; +// private static boolean DEBUG = true; + static boolean DEBUG = false; + + private static final long STEPS_PER_MS = 25; + + Graph g; + + int maxMultiplicity; + + Bag currentBag; + + LayeredSieve tBlockSieve; + + Queue readyQueue; + + ArrayList pendingEndorsers; + +// Set processed; + + Map tBlockCache; + + Map blockCache; + + Map mBlockCache; + + Set pmcCache; + + int upperBound; + int lowerBound; + + int targetWidth; + + PMC solution; + + boolean abort; + + //SafeSeparator ss; + + static int TIMEOUT_CHECK = 100; + + boolean counting; + long numberOfPlugins; + + long timeLimit; + + //int count; + long count; + long sumCount; + CPUTimer timer; + File logFile; + + int tbCount; + int siCount; + + public MTDecomposerHeuristic(Bag bag, + int lowerBound, int upperBound, + File logFile, CPUTimer timer, long timeMS) { + this.timeLimit = STEPS_PER_MS * timeMS; + this.logFile = logFile; + this.timer = timer; + this.timeLimit = timeLimit; + + currentBag = bag; + g = bag.graph; + if (!g.isConnected(g.all)) { + System.err.println("graph must be connected, size = " + bag.size); + } + this.lowerBound = lowerBound; + this.upperBound = upperBound; + + //ss = new SafeSeparator(g); + + } + + private MTDecomposerHeuristic(Bag bag, + int lowerBound, int upperBound, + File logFile, 
CPUTimer timer, + long count, long timeLimit) { + this.logFile = logFile; + this.timer = timer; + this.timeLimit = timeLimit; + this.count = count; + + currentBag = bag; + g = bag.graph; + if (!g.isConnected(g.all)) { + System.err.println("graph must be connected, size = " + bag.size); + } + this.lowerBound = lowerBound; + this.upperBound = upperBound; + + //ss = new SafeSeparator(g); + } + + public void setMaxMultiplicity(int m) { + maxMultiplicity = m; + } + + public boolean isAborted(){ + return abort; + } + + public long getTimeMS(){ + return (count + sumCount) / STEPS_PER_MS; + } + + public boolean decompose() { + abort = false; + blockCache = new HashMap<>(); + mBlockCache = new HashMap<>(); + + pendingEndorsers = new ArrayList<>(); + pmcCache = new HashSet<>(); + + //return decompose(lowerBound); + boolean result = decompose(lowerBound); + //System.out.println("c count = " + count); + return result; + } + + public boolean decompose(int targetWidth) { + if (counting) { + numberOfPlugins = 0; + } + if (VERBOSE) { + System.out.println("deompose enter, n = " + currentBag.size + + ", targetWidth = " + targetWidth); + } + if (targetWidth > upperBound) { + return false; + } + this.targetWidth = targetWidth; + + //count = 0; + tbCount = 0; + siCount = 0; + + if (currentBag.size <= targetWidth + 1) { + currentBag.nestedBags = null; + currentBag.separators = null; + return true; + } + + // endorserMap = new HashMap<>(); + + tBlockSieve = new LayeredSieve(g.n, targetWidth); + tBlockCache = new HashMap<>(); + + readyQueue = new LinkedList<>(); + + readyQueue.addAll(mBlockCache.values()); + + for (int v = 0; v < g.n; v++) { + VertexSet cnb = (VertexSet) g.neighborSet[v].clone(); + cnb.set(v); + + if (DEBUG) { + System.out.println(v + ":" + cnb.cardinality() + ", " + cnb); + } + + if (cnb.cardinality() > targetWidth + 1) { + continue; + } + +// if (!pmcCache.contains(cnb)) { + PMC pmc = new PMC(cnb, getBlocks(cnb)); + if (pmc.isValid) { +// pmcCache.add(cnb); + if 
(pmc.isReady()) { + pmc.endorse(); + } + else { + pendingEndorsers.add(pmc); + } +// } + } + } + + while (true) { + while (!readyQueue.isEmpty()) { + //count++; + /* + if (count > TIMEOUT_CHECK) { + count = 0; + if (timer != null && timer.hasTimedOut()) { + log("**TIMEOUT**"); + return false; + } + } + */ + + if(count > timeLimit){ + abort = true; + return false; + } + + MBlock ready = readyQueue.remove(); + + ready.process(); + + if (solution != null && !counting) { + log("solution found"); + Bag bag = currentBag.addNestedBag(solution.vertexSet); + solution.carryOutDecomposition(bag); + return true; + } + } + + if (!pendingEndorsers.isEmpty()) { + log("queue empty"); + } + + ArrayList endorsers = pendingEndorsers; + pendingEndorsers = new ArrayList(); + for (PMC endorser : endorsers) { + endorser.process(); + if (solution != null && !counting) { + log("solution found"); + Bag bag = currentBag.addNestedBag(solution.vertexSet); + solution.carryOutDecomposition(bag); + return true; + } + } + if (readyQueue.isEmpty()) { + break; + } + } + + if (counting && solution != null) { + log("solution found"); + System.out.println("IBlocks: " + mBlockCache.size() + + ", TBlocks: " + tBlockCache.size() + + ", PMCs: " + pmcCache.size() + + ", numuberOfPlugins = " + numberOfPlugins); + Bag bag = currentBag.addNestedBag(solution.vertexSet); + solution.carryOutDecomposition(bag); + return true; + } + + log("failed"); + +// ArrayList targets = new ArrayList<>(); +// targets.add(currentBag.vertexSet); +// +// ArrayList endorseds = +// new ArrayList<>(); +// endorseds.addAll(mBlockCache.keySet()); +// +// SafeSeparator ss = new SafeSeparator(g); + Set safeSeparators = new HashSet<>(); + +// Collections.sort(endorseds, +// (s, t)-> t.cardinality() - s.cardinality()); +// +// for (VertexSet endorsed: endorseds) { +// VertexSet targetToSplit = null; +// for (VertexSet compo: targets) { +// if (endorsed.isSubset(compo)) { +// targetToSplit = compo; +// break; +// } +// } +// if 
(targetToSplit == null) { +// continue; +// } +// VertexSet separatorSet = g.neighborSet(endorsed); +// if (safeSeparators.contains(separatorSet)) { +// continue; +// } +// boolean available = true; +// +// for (VertexSet safe: safeSeparators) { +// if (crossesOrSubsumes(safe, endorsed, separatorSet)) { +// available = false; +// System.out.println(separatorSet + " is crossed or subsumed by " + safe); +// break; +// } +// } +// +// if (!available) { +// continue; +// } +// +// if (ss.isOneWaySafe(separatorSet, endorsed)) { +// if (separatorSet.isEmpty()) { +// System.err.println("empty safe separator, endorsed = " + endorsed); +// } +// if (VERBOSE) { +// System.out.println("safe separator found: " + separatorSet + +// ", splitting off " + endorsed); +// } +// +// safeSeparators.add(separatorSet); +// } +// } + + if (safeSeparators.isEmpty()) { +// System.out.println("no safe separators, advancing to " + (targetWidth + 1)); + return decompose(targetWidth + 1); + } + if (VERBOSE) { + log(currentBag.size + " vertices split by " + + safeSeparators.size() + " safe separators"); + } + ArrayList bagsToDecompose = new ArrayList<>(); + bagsToDecompose.add(currentBag.addNestedBag(g.all)); + + for (VertexSet separatorSet: safeSeparators) { + + // find the bag containing the separator + Bag bagToSplit = null; + for (Bag bag: bagsToDecompose) { + if (separatorSet.isSubset(bag.vertexSet)) { + bagToSplit = bag; + break; + } + } + + if (bagToSplit == null) { +// System.out.println("cannot find bag to split for " + separatorSet); + continue; + } + + Separator separator = currentBag.addSeparator(separatorSet); + +// System.out.println("incorporating safe separator: " + separatorSet); +// System.out.println("splitting bag: " + bagToSplit.vertexSet); + bagsToDecompose.remove(bagToSplit); + currentBag.nestedBags.remove(bagToSplit); + ArrayList bagsToAdd = new ArrayList<>(); + + ArrayList components = g.getComponents(separatorSet); + for (VertexSet compo: components) { + Bag bag = 
null; + MBlock mBlock = getMBlock(compo); + if (mBlock != null) { + if (mBlock.endorser.outbound.separator.equals(separatorSet)) { + bag = currentBag.addNestedBag(mBlock.endorser.vertexSet); +// System.out.println("carrying out decomposition on " + bag.vertexSet); + mBlock.endorser.carryOutDecomposition(bag); + } + } + if (bag != null) { + separator.incidentBags.add(bag); + bag.incidentSeparators.add(separator); + continue; + } + VertexSet ns = g.neighborSet(compo).intersectWith(separatorSet); + if (ns.equals(separatorSet)) { + VertexSet intersection = + g.closedNeighborSet(compo).intersectWith(bagToSplit.vertexSet); + if (!intersection.isEmpty()) { + Bag bagToAdd = currentBag.addNestedBag(intersection); +// System.out.println("added " + bagToAdd.vertexSet); + bagsToDecompose.add(bagToAdd); + bagsToAdd.add(bagToAdd); + separator.incidentBags.add(bagToAdd); + bagToAdd.incidentSeparators.add(separator); + } + } + } + Bag bag0 = separator.incidentBags.get(0); + for (VertexSet compo: components) { + VertexSet ns = g.neighborSet(compo).intersectWith(separatorSet); + if (!ns.equals(separatorSet)) { + VertexSet intersection = + g.closedNeighborSet(compo).intersectWith(bagToSplit.vertexSet); + if (!intersection.isEmpty()) { + Bag bagToAdd = currentBag.addNestedBag(intersection); + bagsToDecompose.add(bagToAdd); + bagsToAdd.add(bagToAdd); +// System.out.println("added " + bagToAdd.vertexSet); + Separator separator1 = currentBag.addSeparator(ns); + separator1.incidentBags.add(bagToAdd); + separator1.incidentBags.add(bag0); + bagToAdd.incidentSeparators.add(separator1); + bag0.incidentSeparators.add(separator1); + } + } + } + + // distribute the separators incident to the bag to split + // to split bags + for (Separator sep: bagToSplit.incidentSeparators) { + sep.incidentBags.remove(bagToSplit); + for (Bag bagToAdd: bagsToAdd) { + if (sep.vertexSet.isSubset(bagToAdd.vertexSet)) { + sep.incidentBags.add(bagToAdd); + bagToAdd.incidentSeparators.add(sep); + break; + } + } + } + 
} + for (Bag bag: bagsToDecompose) { +// System.out.println("incident separators of " + bag.vertexSet); +// for (Separator s: bag.incidentSeparators) { +// System.out.println(" " + s.vertexSet); +// for (Bag b: s.incidentBags) { +// System.out.println(" " + b.vertexSet); +// } +// } + bag.makeRefinable(); +// bag.graph.writeTo(System.out); + MTDecomposerHeuristic mtd = new MTDecomposerHeuristic(bag, targetWidth + 1, upperBound, + logFile, timer, count, timeLimit); + if(!mtd.decompose()){ + sumCount += (mtd.sumCount + mtd.count); + abort |= mtd.isAborted(); + return false; + } + sumCount += (mtd.sumCount + mtd.count); + } + return true; + } + + boolean crossesOrSubsumes(VertexSet separator1, VertexSet endorsed, VertexSet separator2) { + ArrayList components = g.getComponents(separator1); + for (VertexSet compo: components) { + if (endorsed.isSubset(compo)) { + // subsumes + return true; + } + } + // test crossing + VertexSet diff = separator2.subtract(separator1); + for (VertexSet compo: components) { + if (diff.isSubset(compo)) { + return false; + } + } + return true; + } + + Block getBlock(VertexSet component) { + Block block = blockCache.get(component); + if (block == null) { + block = new Block(component); + blockCache.put(component, block); + } + return block; + } + + void makeMBlock(VertexSet component, PMC endorser) { + MBlock mBlock = mBlockCache.get(component); + if (mBlock == null) { + Block block = getBlock(component); + mBlock = new MBlock(block, endorser); + blockCache.put(component, block); + } + } + + MBlock getMBlock(VertexSet component) { + return mBlockCache.get(component); + } + + void checkAgainstDecompositionInFile(String path, String name) { + TreeDecomposition referenceTd = TreeDecomposition.readDecomposition(path, + name, g); + checkAgainstDecomposition(referenceTd); + } + + void checkAgainstDecomposition(TreeDecomposition referenceTd) { + // referenceTd.minimalize(); + referenceTd.canonicalize(); + // System.out.println("reference 
decomposition minimalized"); + System.out.println("reference decomposition canonicalized"); + referenceTd.validate(); + + System.out.println("is canonical: " + referenceTd.isCanonical()); + + for (int i = 1; i <= referenceTd.nb; i++) { + PMC endorser = new PMC(referenceTd.bagSets[i]); + + VertexSet target = null; + if (endorser.outbound != null) { + target = endorser.getTarget(); + } + + if (endorser.isReady() && target != null + && getMBlock(target) == null) { + System.out.println("endorser ready:\n" + endorser); + System.out.println("but target not endorsed: " + target + "(" + + g.neighborSet(target) + ")\n"); + + VertexSet inletsUnion = endorser.inletsInduced(); + + VertexSet delta1 = endorser.vertexSet.subtract(inletsUnion); + VertexSet delta2 = endorser.vertexSet + .subtract(endorser.outbound.separator); + System.out.println("delta1 = " + delta1); + for (int v = delta1.nextSetBit(0); v >= 0; v = delta1 + .nextSetBit(v + 1)) { + System.out.println(" " + v + "(" + g.neighborSet[v] + ")"); + } + System.out.println("delta2 = " + delta2); + for (int v = delta2.nextSetBit(0); v >= 0; v = delta2 + .nextSetBit(v + 1)) { + System.out.println(" " + v + "(" + g.neighborSet[v] + ")"); + } + + TBlock tBlock = tBlockCache.get(inletsUnion); + System.out.println(" underlying tBlock = " + tBlock); + } else if (target == null) { + System.out.println("endorser without target, isReady = " + + (endorser.isReady()) + " :\n" + endorser); + } else if (!endorser.isReady()) { + System.out.println("endorser not ready:\n" + endorser); + System.out.println("target = " + target); + } + } + } + + boolean isFullComponent(VertexSet component, VertexSet sep) { + for (int v = sep.nextSetBit(0); v >= 0; v = sep.nextSetBit(v + 1)) { + if (component.isDisjoint(g.neighborSet[v])) { + return false; + } + } + return true; + } + + ArrayList getBlocks(VertexSet separator) { + ArrayList result = new ArrayList(); + VertexSet rest = g.all.subtract(separator); + for (int v = rest.nextSetBit(0); v >= 0; v = 
rest.nextSetBit(v + 1)) { + VertexSet c = g.neighborSet[v].subtract(separator); + VertexSet toBeScanned = (VertexSet) c.clone(); + c.set(v); + while (!toBeScanned.isEmpty()) { + VertexSet save = (VertexSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; w = toBeScanned + .nextSetBit(w + 1)) { + c.or(g.neighborSet[w]); + } + c.andNot(separator); + toBeScanned = c.subtract(save); + } + + Block block = getBlock(c); + result.add(block); + rest.andNot(c); + } + return result; + } + + class Block implements Comparable<Block> { + VertexSet component; + VertexSet separator; + VertexSet outbound; + + Block(VertexSet component) { + this.component = component; + this.separator = g.neighborSet(component); + + VertexSet rest = g.all.subtract(component); + rest.andNot(separator); + + int minCompo = component.nextSetBit(0); + + // the scanning order ensures that the first full component + // encountered is the outbound one + for (int v = rest.nextSetBit(0); v >= 0; v = rest.nextSetBit(v + 1)) { + VertexSet c = (VertexSet) g.neighborSet[v].clone(); + VertexSet toBeScanned = c.subtract(separator); + c.set(v); + while (!toBeScanned.isEmpty()) { + VertexSet save = (VertexSet) c.clone(); + for (int w = toBeScanned.nextSetBit(0); w >= 0; + w = toBeScanned.nextSetBit(w + 1)) { + c.or(g.neighborSet[w]); + } + toBeScanned = c.subtract(save).subtract(separator); + } + if (separator.isSubset(c)) { + // full block other than "component" found + if (v < minCompo) { + outbound = c.subtract(separator); + } + else { + // v > minCompo + outbound = component; + } + return; + } + rest.andNot(c); + } + } + + boolean isOutbound() { + return outbound == component; + } + + boolean ofMinimalSeparator() { + return outbound != null; + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + if (outbound == component) { + sb.append("o"); + } + else { + if (mBlockCache.get(component) != null) { + sb.append("f"); + } else { + sb.append("i"); + } + } + sb.append(component + "(" +
separator + ")"); + return sb.toString(); + } + + @Override + public int compareTo(Block b) { + return component.nextSetBit(0) - b.component.nextSetBit(0); + } + } + + class MBlock { + Block block; + PMC endorser; + + MBlock(Block block, PMC endorser) { + this.block = block; + this.endorser = endorser; + + if (DEBUG) { + System.out.println("MBlock constructor " + this); + } + + } + + void process() { + if (DEBUG) { + System.out.print("processing " + this); + } + + makeSimpleTBlock(); + + ArrayList<VertexSet> tBlockSeparators = new ArrayList<>(); + tBlockSieve.collectSuperblocks( + block.component, block.separator, tBlockSeparators); + + for (VertexSet tsep : tBlockSeparators) { + TBlock tBlock = tBlockCache.get(tsep); + tBlock.plugin(this); + } + } + + void makeSimpleTBlock() { + + if (DEBUG) { + System.out.print("makeSimple: " + this); + } + + TBlock tBlock = tBlockCache.get(block.separator); + if (tBlock == null) { + tBlock = new TBlock(block.separator, block.outbound, 1); + tBlockCache.put(block.separator, tBlock); + tBlockSieve.put(block.outbound, block.separator); + tBlock.crown(); + } + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("MBlock:" + block.separator + "\n"); + sb.append(" in :" + block.component + "\n"); + sb.append(" out :" + block.outbound + "\n"); + return sb.toString(); + } + } + + class TBlock { + VertexSet separator; + VertexSet openComponent; + int multiplicity; + + TBlock(VertexSet separator, VertexSet openComponent) { + this.separator = separator; + this.openComponent = openComponent; + } + TBlock(VertexSet separator, VertexSet openComponent, int multiplicity) { + this(separator, openComponent); + this.multiplicity = multiplicity; +// tbCount++; +// if (supportInduced()) { +// siCount++; +// } +// if (tbCount %10000 == 0) { +// System.out.println("support-induced / total-tblocks " + +// siCount + "/" + tbCount); +// } + } + + void plugin(MBlock mBlock) { + if (counting) { + numberOfPlugins++; + } + + ++count; +
+ if (DEBUG) { + System.out.println("plugin " + mBlock); + System.out.println(" to " + this); + } + + VertexSet newsep = separator.unionWith(mBlock.block.separator); + + if (newsep.cardinality() > targetWidth + 1) { + return; + } + + ArrayList<Block> blockList = getBlocks(newsep); + + Block fullBlock = null; + int nSep = newsep.cardinality(); + + for (Block block : blockList) { + if (block.separator.cardinality() == nSep) { + if (fullBlock != null) { +// minimal separator: treated elsewhere + return; + } + fullBlock = block; + } + } + + if (fullBlock == null) { +// if (!pmcCache.contains(newsep)) { + PMC pmc = new PMC(newsep, blockList); + if (pmc.isValid) { +// pmcCache.add(newsep); + if (pmc.isReady()) { + pmc.endorse(); + } + else { + pendingEndorsers.add(pmc); + } +// } + } + } + + else { + if (newsep.cardinality() > targetWidth) { + return; + } + TBlock tBlock = tBlockCache.get(newsep); + if (tBlock == null) { + tBlock = new TBlock(newsep, fullBlock.component, + multiplicity + 1); + tBlockCache.put(newsep, tBlock); + if (maxMultiplicity == 0 || + multiplicity < maxMultiplicity) { + tBlockSieve.put(fullBlock.component, newsep); + } + tBlock.crown(); + } + } + } + + boolean supportInduced() { + ArrayList<Block> blocks = getBlocks(separator); + VertexSet outlet = new VertexSet(g.n); + for (Block block: blocks) { + if (block.isOutbound() && + !block.separator.equals(separator)) { + if (outlet.isSubset(block.separator)) { + outlet = block.separator; + } + else if (!block.separator.isSubset(outlet)) { + return false; + } + } + } + VertexSet union = new VertexSet(g.n); + for (Block block: blocks) { + if (!block.isOutbound() && + !block.separator.isSubset(outlet)) { + union.or(block.separator); + } + } + return union.equals(separator); + } + + void crown() { + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + if (DEBUG) { + System.out.println("try crowning by " + v); + } + + VertexSet newsep = separator.unionWith(
g.neighborSet[v].intersectWith(openComponent)); + if (newsep.cardinality() <= targetWidth + 1) { + + if (DEBUG) { + System.out.println("crowning by " + v + ":" + this); + } +// if (!pmcCache.contains(newsep)) { + PMC pmc = new PMC(newsep); + if (pmc.isValid) { +// pmcCache.add(newsep); + if (pmc.isReady()) { + pmc.endorse(); + } + else { + pendingEndorsers.add(pmc); + } +// } + } + } + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("TBlock:\n"); + sb.append(" sep :" + separator + "\n"); + sb.append(" open:" + openComponent + "\n"); + return sb.toString(); + } + } + + class PMC { + VertexSet vertexSet; + Block inbounds[]; + Block outbound; + boolean isValid; + + PMC(VertexSet vertexSet) { + this(vertexSet, getBlocks(vertexSet)); + } + + PMC(VertexSet vertexSet, ArrayList<Block> blockList) { + this.vertexSet = vertexSet; + if (vertexSet.isEmpty()) { + return; + } + for (Block block: blockList) { + if (block.isOutbound() && + (outbound == null || + outbound.separator.isSubset(block.separator))){ + outbound = block; + } + } + if (outbound == null) { + inbounds = blockList.toArray( + new Block[blockList.size()]); + } + else { + inbounds = new Block[blockList.size()]; + int k = 0; + for (Block block: blockList) { + if (!block.separator.isSubset(outbound.separator)) { + inbounds[k++] = block; + } + } + if (k < inbounds.length) { + inbounds = Arrays.copyOf(inbounds, k); + } + } + checkValidity(); + + if (DEBUG +// || +// vertexSet.equals( +// new VertexSet(new int[]{0, 1, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 38, 39, 40, 41, 42, 43, 44, 45, 55, 56, 57, 58, 59, 60, 61, 66, 69})) + ) { + System.out.println("PMC created:"); + System.out.println(this); + } + } + + void checkValidity() { + for (Block b: inbounds) { + if (!b.ofMinimalSeparator()) { + isValid = false; + return; + } + } + + for (int v = vertexSet.nextSetBit(0); v >= 0; + v = vertexSet.nextSetBit(v + 1)) { + VertexSet rest =
vertexSet.subtract(g.neighborSet[v]); + rest.clear(v); + if (outbound != null && outbound.separator.get(v)) { + rest.andNot(outbound.separator); + } + for (Block b : inbounds) { + if (b.separator.get(v)) { + rest.andNot(b.separator); + } + } + if (!rest.isEmpty()) { + isValid = false; + return; + } + } + isValid = true; + } + + boolean isReady() { + for (int i = 0; i < inbounds.length; i++) { + if (mBlockCache.get(inbounds[i].component) == null) { + return false; + } + } + return true; + } + + VertexSet getTarget() { + if (outbound == null) { + return null; + } + VertexSet combined = vertexSet.subtract(outbound.separator); + for (Block b: inbounds) { + combined.or(b.component); + } + return combined; + } + + + void process() { + if (DEBUG) { + System.out.print("processing " + this); + } + if (isReady()) { + if (DEBUG) { + System.out.print("endorsing " + this); + } + endorse(); + } + else { + pendingEndorsers.add(this); + } + } + + void endorse() { + + pmcCache.add(vertexSet); + if (DEBUG) { + System.out.print("endorsing " + this); + } + + if (DEBUG) { + System.out.println("outbound = " + outbound); + } + + if (outbound == null) { + if (DEBUG) { + System.out.println("solution found in endorse()"); + } + solution = this; + return; + } + else { + endorse(getTarget()); + } + } + + void endorse(VertexSet target) { + if (DEBUG) { + System.out.println("endorsed = " + target); + } + + // if (separator.equals(bs1)) { + // System.err.println("endorsed = " + endorsed + + // ", " + endorserMap.get(endorsed)); + // } + // + + if (mBlockCache.get(target) == null) { + Block block = getBlock(target); + MBlock mBlock = new MBlock(block, this); + mBlockCache.put(target, mBlock); + + if (DEBUG) { + System.out.println("adding to ready queue " + mBlock); + } + readyQueue.add(mBlock); + } + } + + void carryOutDecomposition(Bag bag) { + if (DEBUG) { + System.out.print("carryOutDecomposition:" + this); + } + + for (Block inbound: inbounds) { + if (DEBUG) { + System.out.println("inbound = "
+ inbound); + } + MBlock mBlock = mBlockCache.get(inbound.component); + if (mBlock == null) { + System.out.println("inbound mBlock is null, block = " + inbound); + continue; + } + + Bag subBag = currentBag.addNestedBag( + mBlock.endorser.vertexSet); + Separator separator = + currentBag.addSeparator(inbound.separator); + + separator.incidentBags.add(bag); + separator.incidentBags.add(subBag); + + bag.incidentSeparators.add(separator); + subBag.incidentSeparators.add(separator); + mBlock.endorser.carryOutDecomposition(subBag); + } + } + + private VertexSet inletsInduced() { + VertexSet result = new VertexSet(g.n); + for (Block b : inbounds) { + result.or(b.separator); + } + return result; + } + + public String toString() { + + StringBuilder sb = new StringBuilder(); + sb.append("PMC"); + if (isValid) { + sb.append("(valid):\n"); + } + else { + sb.append("(invalid):\n"); + } + sb.append(" sep : " + vertexSet + "\n"); + sb.append(" outbound: " + outbound + "\n"); + + for (Block b : inbounds) { + sb.append(" inbound : " + b + "\n"); + } + return sb.toString(); + } + } + + int numberOfEnabledBlocks() { + return mBlockCache.size(); + } + + void dumpPendings() { + System.out.println("pending endorsers\n"); + for (PMC endorser : pendingEndorsers) { + System.out.print(endorser); + } + } + + void log(String logHeader) { + if (VERBOSE) { + log(logHeader, System.out); + } + if (logFile != null) { + PrintStream ps; + try { + ps = new PrintStream(new FileOutputStream(logFile, true)); + + log(logHeader, ps); + ps.close(); + } catch (FileNotFoundException e) { + // TODO Auto-generated catch block + e.printStackTrace(); + } + } + } + + void log(String logHeader, PrintStream ps) { + ps.print(logHeader); + if (timer != null) { + long time = timer.getTime(); + ps.print(", time = " + time); + } + ps.println(); + + int sizes[] = tBlockSieve.getSizes(); + + ps.print("n = " + g.n + " width = " + targetWidth + ", tBlocks = " + + tBlockCache.size() + Arrays.toString(sizes)); + ps.print(", 
endorsed = " + mBlockCache.size()); + ps.print(", pendings = " + pendingEndorsers.size()); +// ps.print(", processed = " + processed.size()); + ps.println(", blocks = " + blockCache.size()); + } + + private static void count() { + /* + String path = "random"; + + String[] instances = { +// "gnm_20_40_1,6", + "gnm_20_60_1,8", + "gnm_20_80_1,11", + "gnm_20_100_1,11", +// "gnm_30_60_1,7", + "gnm_30_90_1,11", + "gnm_30_120_1,14", + "gnm_30_150_1,16", + "gnm_40_80_1,8", + "gnm_40_120_1,14", + "gnm_40_160_1,18", + "gnm_40_200_1,20", +// "gnm_50_100_1,10", + "gnm_50_150_1,16", + "gnm_50_200_1,20", + "gnm_50_250_1,24", + }; + + for (String instance: instances) { + String[] s = instance.split(","); + String name = s[0]; + int width = Integer.parseInt(s[1]); + + Graph g = Graph.readGraph("instance/" + path, name); + + System.out.println("Graph " + name + " read"); + + Bag rootBag = new Bag(g); + + rootBag.initializeForDecomposition(); + + MTDecomposerHeuristic dec = new MTDecomposerHeuristic(rootBag, + width, width, null, null); + + dec.counting = true; + + dec.decompose(); + +// dec.checkAgainstDecompositionInFile("result/" + path, name); + + System.out.println(name + " decomposed, flattening.. 
"); + + rootBag.flatten(); + rootBag.setWidth(); + + System.out.println(name + " solved with width " + + rootBag.width + " with " + + rootBag.nestedBags.size() + " bags"); + + TreeDecomposition td = rootBag.toTreeDecomposition(); +// td.writeTo(System.out); + td.validate(); + // td.analyze(1); + } + */ + } + private static void test() { + /* +// String path = "coloring_gr2"; + // String path = "coloring-targets"; + // String name = "queen5_5"; +// String name = "queen6_6"; +// String name = "queen7_7"; +// String name = "queen8_8"; +// String name = "queen9_9"; + // String name = "queen8_12"; + // String name = "queen10_10"; +// String name = "mulsol.i.1"; + // String name = "mulsol.i.2"; +// String name = "anna"; +// String name = "david"; +// String name = "huck"; +// String name = "homer"; +// String name = "jean"; +// String name = "inithx.i.1_pp"; +// String name = "inithx.i.2_pp"; +// String name = "dimacs_inithx.i.2-pp"; + // String name = "dimacs_inithx.i.3-pp"; + // String name = "fpsol2.i.1"; +// String name = "fpsol2.i.2_pp"; +// String name = "mulsol.i.1_pp"; +// String name = "mulsol.i.2_pp"; + // String name = "dimacs_mulsol.i.2"; + +// String name = "mulsol.i.2_pp"; + // String name = "fpsol2.i.2"; + // String name = "le450_5a"; + // String name = "le450_5b"; + // String name = "myciel3"; +// String name = "myciel4"; +// String name = "myciel5"; +// String name = "myciel6"; + // String name = "myciel7"; +// String name = "anna"; +// String path = "grid"; +// String name = "troidal3_3"; +// String name = "troidal4_4"; +// String name = "troidal5_5"; +// String name = "troidal6_6"; +// String name = "troidal7_7"; +// String path = "random"; +// String name = "gnm_50_250_1"; + // String path = "pace16/100"; + // String name = "dimacs_zeroin.i.3-pp"; + // String path = "pace16/1000"; + // String name = "4x12_torusGrid"; + // String name = "RandomBipartite_25_50_3"; + // String name = "RKT_300_75_30_0"; + // String name = "RandomBoundedToleranceGraph_80"; 
+ // String path = "pace16/3600"; + // String name = "8x6_torusGrid"; + // String path = "pace16/unsolved"; + // String name = "6s10.gaifman"; +// String path = "test"; +// String name = "test1"; +// String name = "test2"; +// String name = "test3"; +// String name = "test4"; +// String name = "test5"; +// String name = "test6"; +// String path = "ex2017public"; +// String name = "ex001"; +// String name = "ex003"; +// String name = "ex005"; +// String name = "ex047"; +// String name = "ex129"; +// String name = "ex135"; +// String name = "ex153"; +// String name = "ex175"; + String path = "he_temp3"; +// String name = "he075_3"; +// String name = "he085_3"; +// String name = "he091_3"; + String name = "he095_3"; + + Graph g = Graph.readGraph("instance/" + path, name); + + System.out.println("Graph " + name + " read"); + // for (int v = 0; v < g.n; v++) { + // System.out.println(v + ": " + g.degree[v] + ", " + g.neighborSet[v]); + // } + + long t0 = System.currentTimeMillis(); + Bag rootBag = new Bag(g); + + rootBag.initializeForDecomposition(); + + MTDecomposerHeuristic dec = new MTDecomposerHeuristic(rootBag, + g.minDegree(), g.n - 1, null, null); +// g.minDegree(), 50, null, null); + + dec.setMaxMultiplicity(1); + + dec.decompose(); + +// dec.checkAgainstDecompositionInFile("result/" + path, name); + + System.out.println(name + " decomposed, flattening.. 
"); + + rootBag.flatten(); + rootBag.setWidth(); + + long t = System.currentTimeMillis(); + System.out.println(name + " solved with width " + + rootBag.width + " with " + + rootBag.nestedBags.size() + " bags in " + (t - t0) + " millisecs"); + + TreeDecomposition td = rootBag.toTreeDecomposition(); +// td.writeTo(System.out); + td.validate(); + // td.analyze(1); + File outFile = new File("result/" + path + "/" + name + ".td"); + PrintStream ps; + try { + ps = new PrintStream(new FileOutputStream(outFile)); + ps.println("c width = " + td.width + ", time = " + (t - t0)); + td.writeTo(ps); + ps.close(); + } catch (FileNotFoundException e) { + // TODO Auto-generated catch block + e.printStackTrace(); + } + */ + } + + static class MinComparator implements Comparator<VertexSet> { + @Override + public int compare(VertexSet o1, VertexSet o2) { + return o1.nextSetBit(0) - o2.nextSetBit(0); + } + } + + public static void main(String args[]) { + //test(); +// count(); + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/MainDecomposer.java b/solvers/TCS-Meiji/tw/heuristic/MainDecomposer.java new file mode 100644 index 0000000..d990451 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/MainDecomposer.java @@ -0,0 +1,895 @@ +/* + * Copyright (c) 2017, Hiromu Ohtsuka +*/ + +package tw.heuristic; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.PrintStream; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Comparator; +import java.util.PriorityQueue; +import java.util.Queue; +import java.util.LinkedList; +import java.util.Set; +import java.util.HashSet; +import java.util.Random; +import java.util.Collections; + +public class MainDecomposer{ + public static enum Mode{ + greedy, pathDecomposition, treeDecomposition + } + public static final long MAX_TIME = 1800000; + public static final long INITIAL_TIME_MS = 4000; + public static final int MAX_MULTIPLICITY = 1; + public static final long CUT_D_TIME_MS =
300000; + public static final long DETECT_TIME_MS = 10000; + + private static Random random; + private static Graph wholeGraph; + private static TreeDecomposition best; + private static int[][] invs; + private static Bag[] bags; + private static long detectSum; + private static long startTime; + private static int print_bag_below; + + private static final boolean DEBUG = false; + + private static int countGD, countPD, countTD; + + private static final Comparator< Bag > WIDTH_DESCENDING_ORDER = + new Comparator< Bag >(){ + @Override + public int compare(Bag b1, Bag b2){ + return -(Integer.compare(b1.getWidth(), b2.getWidth())); + } + }; + + public static TreeDecomposition getBestTreeDecompositionSoFar(){ + return best; + } + + private static void commit(){ + if(bags == null){ + return; + } + + if(DEBUG){ + comment("commit"); + } + + Bag[] copiedBags = new Bag[bags.length]; + for(int i = 0; i < copiedBags.length; i++){ + copiedBags[i] = (Bag)bags[i].clone(); + } + + if(bags.length == 1){ + // trivial tree decomposition + if(copiedBags[0].nestedBags == null || copiedBags[0].nestedBags.isEmpty()){ + TreeDecomposition trivial = + new TreeDecomposition(0, copiedBags[0].graph.n - 1, copiedBags[0].graph); + trivial.addBag(copiedBags[0].graph.all.toArray()); + if(best == null || trivial.width < best.width){ + best = trivial; + comment("width = " + best.width); + printTime(); + if(best.width <= print_bag_below) { + TreeDecomposition result = getBestTreeDecompositionSoFar(); + result.writeTo(System.out); + System.out.print("=\n"); + } + } + return; + } + + copiedBags[0].flatten(); + TreeDecomposition td = copiedBags[0].toTreeDecomposition(); + setWidth(td); + + if(best == null || td.width < best.width){ + best = td; + comment("width = " + best.width); + printTime(); + if(best.width <= print_bag_below) { + TreeDecomposition result = getBestTreeDecompositionSoFar(); + result.writeTo(System.out); + System.out.print("=\n"); + } + } + + return; + } + + TreeDecomposition td = new 
TreeDecomposition(0, 0, wholeGraph); + for(int i = 0; i < copiedBags.length; i++){ + // trivial tree decomposition + if(copiedBags[i].nestedBags == null || copiedBags[i].nestedBags.isEmpty()){ + TreeDecomposition trivial = + new TreeDecomposition(0, copiedBags[i].graph.n - 1, copiedBags[i].graph); + trivial.addBag(copiedBags[i].graph.all.toArray()); + td.combineWith(trivial, invs[i], null); + } + else{ + copiedBags[i].flatten(); + td.combineWith(copiedBags[i].toTreeDecomposition(), invs[i], null); + } + } + setWidth(td); + + if(best == null || td.width < best.width){ + best = td; + comment("width = " + best.width); + printTime(); + if(best.width <= print_bag_below) { + TreeDecomposition result = getBestTreeDecompositionSoFar(); + result.writeTo(System.out); + System.out.print("=\n"); + } + } + } + + private static void setWidth(TreeDecomposition td){ + if(td == null){ + return; + } + int width = -1; + for(int i = 1; i <= td.nb; i++){ + width = Math.max(width, td.bags[i].length - 1); + } + td.width = width; + } + + private static void comment(String comment){ + System.out.println("c " + comment); + } + + private static void printTime(){ + comment("time = " + (System.currentTimeMillis() - startTime) + " ms"); + } + + private static void initializeForDecomposition(Graph graph, long seed){ + wholeGraph = graph; + best = null; + bags = null; + invs = null; + detectSum = 0; + random = new Random(seed); + startTime = System.currentTimeMillis(); + + // trivial tree decomposition + best = new TreeDecomposition(0, wholeGraph.n - 1, wholeGraph); + best.addBag(wholeGraph.all.toArray()); + + if(DEBUG){ + comment("seed = " + seed); + } + } + + public static TreeDecomposition decompose(Graph graph, long seed){ + initializeForDecomposition(graph, seed); + + if(graph.n == 0){ + best = new TreeDecomposition(0, -1, graph); + return best; + } + + ArrayList< VertexSet > components = graph.getComponents(new VertexSet()); + + int nc = components.size(); + + if(nc == 1){ + if(graph.n <= 
2){ + best = new TreeDecomposition(0, graph.n - 1, graph); + best.addBag(graph.all.toArray()); + return best; + } + + bags = new Bag[1]; + bags[0] = new Bag(graph); + + if(decomposeWithSmallCuts(bags[0])){ + commit(); + } + + if(bags[0].countSafeSeparators() == 0){ + decomposeGreedy(bags[0]); + } + else{ + for(Bag nb : bags[0].nestedBags){ + nb.makeRefinable(); + decomposeGreedy(nb); + } + bags[0].flatten(); + } + + commit(); + + while(!bags[0].optimal){ + improveWithSeparators(bags[0], bags[0].getWidth()); + commit(); + bags[0].flatten(); + } + + return getBestTreeDecompositionSoFar(); + } + + Graph[] graphs = new Graph[nc]; + invs = new int[nc][]; + for(int i = 0; i < nc; i++){ + VertexSet component = components.get(i); + graphs[i] = new Graph(component.cardinality()); + invs[i] = new int[graphs[i].n]; + int[] conv = new int[graph.n]; + int k = 0; + for(int v = 0; v < graph.n; v++){ + if(component.get(v)){ + conv[v] = k; + invs[i][k] = v; + ++k; + } + else{ + conv[v] = -1; + } + } + graphs[i].inheritEdges(graph, conv, invs[i]); + } + + bags = new Bag[nc]; + for(int i = 0; i < nc; i++){ + bags[i] = new Bag(graphs[i]); + } + + commit(); + + for(int i = 0; i < nc; i++){ + decomposeWithSmallCuts(bags[i]); + } + + commit(); + + for(int i = 0; i < nc; i++){ + if(bags[i].countSafeSeparators() == 0){ + decomposeGreedy(bags[i]); + } + else{ + for(Bag nb : bags[i].nestedBags){ + nb.makeRefinable(); + decomposeGreedy(nb); + } + bags[i].flatten(); + } + } + + commit(); + + PriorityQueue< Bag > queue = + new PriorityQueue< >(nc, WIDTH_DESCENDING_ORDER); + + for(int i = 0; i < nc; i++){ + queue.offer(bags[i]); + } + + while(!queue.isEmpty()){ + Bag b = queue.poll(); + improveWithSeparators(b, b.getWidth()); + commit(); + b.flatten(); + if(!b.optimal){ + queue.offer(b); + } + } + + return getBestTreeDecompositionSoFar(); + } + + private static void improveWithSeparators(Bag bag, int k){ + if(bag.parent != null){ + bag.makeLocalGraph(); + } + + if(bag.getWidth() <= k - 1){ + 
return; + } + + if(bag.separators == null){ + improve(bag, k); + return; + } + + if(bag.countSafeSeparators() > 0){ + bag.pack(); + for(Bag b : bag.nestedBags){ + improveWithSeparators(b, k); + } + return; + } + + if(detectSum < DETECT_TIME_MS){ + detectSum += bag.detectSafeSeparators(DETECT_TIME_MS - detectSum); + } + + if(bag.countSafeSeparators() == 0){ + improve(bag, k); + } + else{ + bag.pack(); + for(Bag b : bag.nestedBags){ + improveWithSeparators(b, k); + } + } + } + + private static boolean improve(Bag bag, int k){ + if(bag.parent != null){ + bag.makeLocalGraph(); + } + + if(bag.getWidth() <= k - 1){ + return true; + } + + if(bag.nestedBags == null){ + tryDecomposeExactly(bag, bag.graph.minDegree(), k - 1, k - 1); + return bag.getWidth() <= k - 1; + } + + while(bag.getWidth() >= k){ + Bag maxBag = null; + for(Bag nb : bag.nestedBags){ + if(maxBag == null || nb.size > maxBag.size){ + maxBag = nb; + } + } + long timeMS = INITIAL_TIME_MS; + int gdVS = maxBag.size, pdVS = maxBag.size, tdVS = maxBag.size; + int count = 0; + while(true){ + if(DEBUG){ + comment("timeMS = " + timeMS); + comment("gdVS = " + gdVS); + comment("pdVS = " + pdVS); + comment("tdVS = " + tdVS); + comment("countGD = " + countGD); + comment("countPD = " + countPD); + comment("countTD = " + countTD); + } + gdVS = tryImproveWith(Mode.greedy, + bag, maxBag, timeMS, gdVS, 10, 3); + if(gdVS < 0){ + ++countGD; + break; + } + pdVS = tryImproveWith(Mode.pathDecomposition, + bag, maxBag, timeMS, pdVS, 30, 2); + if(pdVS < 0){ + ++countPD; + break; + } + tdVS = tryImproveWith(Mode.treeDecomposition, + bag, maxBag, timeMS, tdVS, 3, 3); + if(tdVS < 0){ + ++countTD; + break; + } + refresh(bag, maxBag, Math.max(gdVS, Math.max(pdVS, tdVS)) + 30); + break; + } + } + + return true; + } + + private static void searchBagsToImproveLikeTree(Bag bag, Separator from, int max, + int targetWidth, ArrayList< Separator > separatorsToCheck){ + Set< Bag > visitedBags = new HashSet< >(); + VertexSet vs = new VertexSet(); 
+ + visitedBags.add(bag); + vs.or(bag.vertexSet); + + collectSubsetBags(visitedBags, vs); + + while(vs.cardinality() < max){ + if(!choiceBagAtRandom(visitedBags, vs)){ + break; + } + } + + collectBagsConnectingLargeSeparator(visitedBags, vs, targetWidth, 0); + collectSeparatorsTocheck(visitedBags, separatorsToCheck); + } + + private static void searchBagsToImproveLikePath(Bag bag, Separator from, int max, + int targetWidth, ArrayList< Separator > separatorsToCheck){ + Set< Bag > visitedBags = new HashSet< >(); + VertexSet vs = new VertexSet(); + + visitedBags.add(bag); + vs.or(bag.vertexSet); + + collectBagsLikePath(bag, visitedBags, vs, 4 * max / 5); + + while(vs.cardinality() < max){ + if(!choiceBagAtRandom(visitedBags, vs)){ + break; + } + } + + collectBagsConnectingLargeSeparator(visitedBags, vs, targetWidth, 0); + collectSeparatorsTocheck(visitedBags, separatorsToCheck); + } + + private static void collectBagsLikePath( + Bag bag, Set< Bag > visitedBags, VertexSet vs, int max){ + Bag s = bag, t = bag; + while(vs.cardinality() < max){ + ArrayList< Bag > tBags = new ArrayList< >(); + for(Separator is : t.incidentSeparators){ + for(Bag ib : is.incidentBags){ + if(ib != t && ib != s && !visitedBags.contains(ib)){ + tBags.add(ib); + } + } + } + if(!tBags.isEmpty()){ + t = tBags.get(random.nextInt(tBags.size())); + visitedBags.add(t); + vs.or(t.vertexSet); + } + + if(vs.cardinality() >= max){ + break; + } + + ArrayList< Bag > sBags = new ArrayList< >(); + for(Separator is : s.incidentSeparators){ + for(Bag ib : is.incidentBags){ + if(ib != s && ib != t && !visitedBags.contains(ib)){ + sBags.add(ib); + } + } + } + if(!sBags.isEmpty()){ + s = sBags.get(random.nextInt(sBags.size())); + visitedBags.add(s); + vs.or(s.vertexSet); + } + + if(tBags.isEmpty() && sBags.isEmpty()){ + break; + } + } + + collectSubsetBags(visitedBags, vs); + } + + private static boolean choiceBagAtRandom(Set< Bag > visitedBags, VertexSet vs){ + ArrayList< Bag > outers = new ArrayList< >(); + 
for(Bag b : visitedBags){ + for(Separator is : b.incidentSeparators){ + for(Bag nb : is.incidentBags){ + if(nb != b && !visitedBags.contains(nb)){ + outers.add(nb); + } + } + } + } + if(!outers.isEmpty()){ + Bag bag = outers.get(random.nextInt(outers.size())); + visitedBags.add(bag); + vs.or(bag.vertexSet); + collectSubsetBags(visitedBags, vs); + return true; + } + return false; + } + + private static void collectSubsetBags(Set< Bag > visitedBags, VertexSet vs){ + ArrayList< Bag > toVisited = new ArrayList< >(); + while(true){ + for(Bag b : visitedBags){ + for(Separator is : b.incidentSeparators){ + for(Bag ib : is.incidentBags){ + if(ib != b && !visitedBags.contains(ib) + && ib.vertexSet.isSubset(b.vertexSet)){ + toVisited.add(ib); + } + } + } + } + if(toVisited.isEmpty()){ + return; + } + for(Bag b : toVisited){ + visitedBags.add(b); + vs.or(b.vertexSet); + } + toVisited.clear(); + } + } + + private static void collectBagsConnectingLargeSeparator( + Set< Bag > visitedBags, VertexSet vs, int targetWidth, int d){ + ArrayList< Bag > toVisited = new ArrayList< >(); + while(true){ + for(Bag b : visitedBags){ + for(Separator is : b.incidentSeparators){ + if(Math.abs(targetWidth - is.size) <= d){ + for(Bag ib : is.incidentBags){ + if(!visitedBags.contains(ib)){ + toVisited.add(ib); + } + } + } + } + } + if(toVisited.isEmpty()){ + return; + } + for(Bag b : toVisited){ + visitedBags.add(b); + vs.or(b.vertexSet); + collectSubsetBags(visitedBags, vs); + } + toVisited.clear(); + } + } + + private static void collectBagsFormingStar( + Set< Bag > visitedBags, VertexSet vs){ + ArrayList< Bag > toVisited = new ArrayList< >(); + while(true){ + for(Bag b : visitedBags){ + for(Separator is : b.incidentSeparators){ + // star + if(is.incidentBags.size() >= 3){ + for(Bag ib : is.incidentBags){ + if(!visitedBags.contains(ib)){ + toVisited.add(ib); + } + } + } + } + } + if(toVisited.isEmpty()){ + return; + } + for(Bag b : toVisited){ + visitedBags.add(b); + vs.or(b.vertexSet); + 
collectSubsetBags(visitedBags, vs); + } + toVisited.clear(); + } + } + + private static void collectSeparatorsTocheck( + Set< Bag > visitedBags, ArrayList< Separator > separatorsToCheck){ + for(Bag b : visitedBags){ + for(Separator is : b.incidentSeparators){ + for(Bag ib : is.incidentBags){ + if(ib != b && !visitedBags.contains(ib)){ + separatorsToCheck.add(is); + break; + } + } + } + } + } + + private static Bag findBagContaining(Bag bag, Bag whole){ + if(bag == whole){ + return whole; + } + + for(Bag nb : whole.nestedBags){ + if(nb == bag){ + return nb; + } + if(nb.nestedBags == null){ + continue; + } + for(Bag b : nb.nestedBags){ + if(b == bag){ + return nb; + } + } + } + + return null; + } + + private static void decomposeGreedy(Bag bag){ + bag.initializeForDecomposition(); + GreedyDecomposer mfd = new GreedyDecomposer(bag); + mfd.decompose(); + } + + private static boolean decomposeWithSmallCuts(Bag bag){ + bag.initializeForDecomposition(); + CutDecomposer cd = new CutDecomposer(bag); + cd.decompose(CUT_D_TIME_MS); + if(DEBUG){ + comment("finish cut decompose"); + } + return bag.nestedBags != null && !bag.nestedBags.isEmpty(); + } + +/* + private static void decomposeGreedyWithSmallCuts(Bag bag){ + bag.initializeForDecomposition(); + CutDecomposer cd = new CutDecomposer(bag); + cd.decompose(); + + // [TODO] + // commit(); + + if(DEBUG){ + comment("finish cut decompose"); + } + + if(bag.countSafeSeparators() == 0){ + GreedyDecomposer gd = new GreedyDecomposer(bag); + gd.decompose(); + } + else{ + for(Bag nb : bag.nestedBags){ + nb.makeRefinable(); + GreedyDecomposer gd = new GreedyDecomposer(nb); + gd.decompose(); + } + } + + bag.flatten(); + } + */ + + private static void tryDecomposeExactly(Bag bag, int lowerBound, int upperBound, int targetWidth){ + if(lowerBound > upperBound){ + return; + } + + Bag triedBag = (Bag)bag.clone(); + + if(triedBag.parent != null){ + triedBag.makeLocalGraph(); + } + + decomposeGreedy(triedBag); + if(triedBag.getWidth() <= 
targetWidth){ + replace(triedBag, bag); + return; + } + + triedBag.initializeForDecomposition(); + MTDecomposerHeuristic mtd = new MTDecomposerHeuristic( + triedBag, lowerBound, upperBound, null, null, MAX_TIME); + mtd.setMaxMultiplicity(MAX_MULTIPLICITY); + if(!mtd.decompose()){ + return; + } + + if(triedBag.getWidth() <= targetWidth){ + replace(triedBag, bag); + } + } + + private static int tryImproveWith(Mode mode, + Bag whole, Bag maxBag, long time, int vsSize, int cycle, int d){ + if(DEBUG){ + comment("mode = " + mode); + comment("cycle = " + cycle); + comment("d = " + d); + comment("timeLimit = " + time); + } + + int k = whole.getWidth(); + int targetSize = vsSize; + int count = 0; + long sum = 0; + while(true){ + if(DEBUG){ + comment("k = " + k); + comment("vs = " + targetSize); + comment("count = " + count); + comment("sum = " + sum); + } + + ArrayList< Separator > separatorsToCheck = new ArrayList< >(); + switch(mode){ + case greedy : case treeDecomposition : + searchBagsToImproveLikeTree( + maxBag, null, targetSize, k - 1, separatorsToCheck); + break; + case pathDecomposition : + searchBagsToImproveLikePath( + maxBag, null, targetSize, k - 1, separatorsToCheck); + break; + } + for(Separator s : separatorsToCheck){ + s.wall = true; + } + if(!separatorsToCheck.isEmpty()){ + whole.pack(); + } + + Bag target; + if(!separatorsToCheck.isEmpty()){ + target = findBagContaining(maxBag, whole); + } + else{ + target = whole; + } + + if(DEBUG){ + comment("targetSize = " + target.size); + } + + if(target.parent != null){ + target.makeLocalGraph(); + } + + Bag triedBag = (Bag)target.clone(); + triedBag.initializeForDecomposition(); + boolean success = false; + switch(mode){ + case greedy : + GreedyDecomposer gd = new GreedyDecomposer(triedBag); + if(gd.decompose(time - sum)){ + success = true; + } + sum += gd.getTimeMS(); + break; + + case pathDecomposition : + PathDecomposer pd = new PathDecomposer(triedBag, + triedBag.graph.minDegree(), k - 1); + if(pd.decompose(time 
- sum)){ + success = true; + } + sum += pd.getTimeMS(); + break; + + case treeDecomposition : + MTDecomposerHeuristic mtd = new MTDecomposerHeuristic( + triedBag, triedBag.graph.minDegree(), k - 1, null, null, time - sum); + mtd.setMaxMultiplicity(MAX_MULTIPLICITY); + if(mtd.decompose()){ + success = true; + } + sum += mtd.getTimeMS(); + break; + } + + for(Separator s : separatorsToCheck){ + s.wall = false; + } + + if(success && triedBag.getWidth() <= k - 1){ + replace(triedBag, target); + whole.flatten(); + return -1; + } + + if(!separatorsToCheck.isEmpty()){ + whole.flatten(); + } + + if(sum >= time){ + break; + } + + if(count % cycle == 0){ + targetSize += d; + } + + ++count; + } + + return targetSize; + } + + private static void refresh(Bag whole, Bag maxBag, int vsSize){ + int k = whole.getWidth(); + ArrayList< Separator > separatorsToCheck = new ArrayList< >(); + searchBagsToImproveLikeTree( + maxBag, null, vsSize, k - 1, separatorsToCheck); + + for(Separator s : separatorsToCheck){ + s.wall = true; + } + + if(!separatorsToCheck.isEmpty()){ + whole.pack(); + } + + Bag target; + if(!separatorsToCheck.isEmpty()){ + target = findBagContaining(maxBag, whole); + } + else{ + target = whole; + } + + if(target.parent != null){ + target.makeLocalGraph(); + } + target.initializeForDecomposition(); + GreedyDecomposer gd = new GreedyDecomposer(target); + gd.decompose(); + + for(Separator s : separatorsToCheck){ + s.wall = false; + } + + whole.flatten(); + } + + private static void replace(Bag from, Bag to){ + to.graph = from.graph; + to.nestedBags = from.nestedBags; + to.separators = from.separators; + to.incidentSeparators = from.incidentSeparators; + + for(Bag b : to.nestedBags){ + b.parent = to; + } + for(Separator s : to.separators){ + s.parent = to; + } + } + + private MainDecomposer(){} + + public static void main(String[] args){ + Runtime.getRuntime().addShutdownHook(new Thread(){ + @Override + public void run(){ + TreeDecomposition result = 
getBestTreeDecompositionSoFar(); + if(result == null){ + comment("no solution"); + return; + } + //if(result.isValid(System.err)){ + comment("width = " + result.width); + printTime(); + result.writeTo(System.out); + //} + //if(result.isValid(System.err)){ + // comment("validation ok"); + //} + } + }); + + long seed = 42; + print_bag_below = -1; + if(args.length >= 2){ + if("-s".equals(args[0])){ + seed = Long.parseLong(args[1]); + } else if("-p".equals(args[0])){ + print_bag_below = Integer.parseInt(args[1]); + } + } + if(args.length >= 4){ + if("-s".equals(args[2])){ + seed = Long.parseLong(args[3]); + } else if("-p".equals(args[2])){ + print_bag_below = Integer.parseInt(args[3]); + } + } + + Graph graph = Graph.readGraph(System.in); + + comment("read Graph"); + + decompose(graph, seed); + + printTime(); + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/PathDecomposer.java b/solvers/TCS-Meiji/tw/heuristic/PathDecomposer.java new file mode 100644 index 0000000..850160f --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/PathDecomposer.java @@ -0,0 +1,267 @@ +/* + * Copyright (c) 2017, Hiromu Ohtsuka +*/ + +package tw.heuristic; + +import java.util.Set; +import java.util.HashSet; + +import java.util.Arrays; + +public class PathDecomposer{ + private Bag whole; + private Graph graph; + private int n; + private int lowerBound, upperBound; + + private int width; + private Set< VertexSet > failureTable; + private int[] separationSequence; + + private static final long STEPS_PER_MS = 1000; + private long TIME_LIMIT; + private long count; + private boolean abort; + + private static final boolean DEBUG = false; + + public PathDecomposer(Bag bag, + int lowerBound, int upperBound){ + this.whole = bag; + this.graph = bag.graph; + this.n = bag.graph.n; + this.lowerBound = lowerBound; + this.upperBound = upperBound; + + if(!graph.isConnected(graph.all)){ + System.err.println("graph must be connected"); + } + + assert(lowerBound <= upperBound); + } + + public PathDecomposer(Bag 
bag){ + this(bag, bag.graph.minDegree(), bag.graph.n - 1); + } + + public boolean decompose(long timeMS){ + abort = false; + count = 0; + TIME_LIMIT = STEPS_PER_MS * timeMS; + + failureTable = new HashSet< >(); + + boolean exist = false; + width = upperBound; + while(true){ + if(DEBUG){ + comment("currentwidth = " + width); + } + if(vsSearch(width, 0, new VertexSet())){ + makeSeparationSequence(); + exist = true; + } + else{ + ++width; + break; + } + if(width == lowerBound){ + break; + } + --width; + } + + if(DEBUG){ + comment("in path count = " + count); + } + + if(!exist || abort){ + return false; + } + + makePathDecompositionWithSeparationSequence(); + + if(DEBUG){ + validate(); + } + + return true; + } + + public boolean isAborted(){ + return abort; + } + + public long getTimeMS(){ + return count / STEPS_PER_MS; + } + + private boolean vsSearch(int w, int i, VertexSet vs){ + if(abort){ + return false; + } + + if(i == n){ + return true; + } + + if(failureTable.contains(vs)){ + return false; + } + + for(int v = 0; v < n; v++){ + if(abort){ + return false; + } + ++count; + if(count > TIME_LIMIT){ + abort = true; + } + if(vs.get(v)){ + continue; + } + int ns0 = graph.neighborSet(vs).cardinality(); + vs.set(v); + int ns = graph.neighborSet(vs).cardinality(); + if(ns > w){ + vs.clear(v); + continue; + } + if(ns <= ns0){ + return vsSearch(w, i + 1, vs); + } + if(vsSearch(w, i + 1, vs)){ + return true; + } + vs.clear(v); + } + + failureTable.add((VertexSet)vs.clone()); + return false; + } + + private void makeSeparationSequence(){ + separationSequence = new int[n]; + + VertexSet vs = new VertexSet(); + for(int i = 0; i < n; i++){ + for(int v = 0; v < n; v++){ + if(vs.get(v)){ + continue; + } + vs.set(v); + if(graph.neighborSet(vs).cardinality() <= width + && !failureTable.contains(vs)){ + separationSequence[i] = v; + break; + } + else{ + vs.clear(v); + } + } + } + } + + private void makePathDecompositionWithSeparationSequence(){ + VertexSet vs = new VertexSet(); + 
VertexSet vs1 = new VertexSet(); + Separator s0 = null; + for(int i = 0; i < n; i++){ + int v = separationSequence[i]; + vs.set(v); + + VertexSet ns = graph.neighborSet(vs); + VertexSet bvs = ns.unionWith(new VertexSet(new int[]{v})); + + if(s0 != null && bvs.isSubset(s0.vertexSet)){ + continue; + } + + Bag b = whole.addNestedBag(bvs); + + if(s0 != null){ + s0.vertexSet.and(b.vertexSet); + s0.addIncidentBag(b); + b.addIncidentSeparator(s0); + } + + vs1.or(bvs); + if(vs1.equals(graph.all)){ + break; + } + + Separator s = whole.addSeparator(ns); + b.addIncidentSeparator(s); + s.addIncidentBag(b); + + s0 = s; + } + } + + private static void comment(String comment){ + System.out.println("c " + comment); + } + + private void validate(){ + if(DEBUG){ + whole.validate(); + + for(Separator s : whole.separators){ + // path + assert(s.incidentBags.size() == 2); + } + + for(Separator s : whole.separators){ + for(Bag ib : s.incidentBags){ + // not redundant + assert(!ib.vertexSet.equals(s.vertexSet)); + assert(s.vertexSet.isSubset(ib.vertexSet)); + } + } + + for(Separator s : whole.separators){ + assert(graph.getComponents(s.vertexSet).size() >= 2); + } + } + } + + private static void randomTest(){ + int c = 100; + for(int i = 0; i < c; i++){ + Graph graph = Graph.randomGraph(100, 800, i); + + if(!graph.isConnected(graph.all)){ + continue; + } + + Bag whole = new Bag(graph); + whole.initializeForDecomposition(); + PathDecomposer pd = new PathDecomposer(whole); + pd.decompose(10); + + TreeDecomposition path = whole.toTreeDecomposition(); + + if(!path.isValid(System.err)){ + System.err.println("invalid solution"); + return; + } + + path.writeTo(System.out); + } + } + + public static void main(String[] args){ + Graph graph = Graph.readGraph(System.in); + + Bag whole = new Bag(graph); + whole.initializeForDecomposition(); + PathDecomposer pd = new PathDecomposer(whole); + pd.decompose(5); + + TreeDecomposition path = whole.toTreeDecomposition(); + path.writeTo(System.out); + + 
//randomTest(); + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/SafeSeparator.java b/solvers/TCS-Meiji/tw/heuristic/SafeSeparator.java new file mode 100644 index 0000000..eb51973 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/SafeSeparator.java @@ -0,0 +1,657 @@ +/* + * Copyright (c) 2017, Hisao Tamaki and Keitaro Makii +*/ + +package tw.heuristic; + +import java.util.ArrayList; +import java.util.Arrays; + + + +public class SafeSeparator { + private static int MAX_MISSINGS = 100; + private static int DEFAULT_MAX_STEPS = 10000; + private static final boolean CONFIRM_MINOR = true; +// private static final boolean CONFIRM_MINOR = false; +// private static final boolean DEBUG = true; + private static final boolean DEBUG = false; + + Graph g; + + int maxSteps; + int steps; + LeftNode[] leftNodes; + ArrayList<RightNode> rightNodeList; + ArrayList<MissingEdge> missingEdgeList; + VertexSet available; + + public SafeSeparator (Graph g) { + this.g = g; + } + + public boolean isOneWaySafe(VertexSet separator, VertexSet component) { + return isOneWaySafe(separator, component, DEFAULT_MAX_STEPS); + } + + public boolean isOneWaySafe(VertexSet separator, VertexSet component, int maxSteps) { + try { + return isOneWaySafeCounting(separator, component, maxSteps); + } + catch (StepsExceededException e) { + return false; + } + } + public boolean isOneWaySafeCounting(VertexSet separator, VertexSet component, int maxSteps) + throws StepsExceededException { + // System.out.println("isSafeSeparator " + separator); + this.maxSteps = maxSteps; + steps = 0; + ArrayList<VertexSet> components = g.getComponents(separator); + if (components.size() == 1) { + // System.err.println("non separator for safety testing:" + separator); + // throw new RuntimeException("non separator for safety testing:" + separator); + return false; + } + if (countMissings(separator) > MAX_MISSINGS) { + return false; + } + for (VertexSet compo: components) { + if (compo.equals(component)) { + continue; + } + VertexSet sep = g.neighborSet(compo); +
VertexSet rest = g.all.subtract(sep).subtract(compo); + VertexSet[] contracts = findCliqueMinor(sep, rest); + if (contracts == null) { + return false; + } + if (CONFIRM_MINOR) { + confirmCliqueMinor(sep, rest, contracts); + } + } + return true; + } + + private void addSteps(int s) + throws StepsExceededException { + steps += s; + if (steps > maxSteps) { + throw new StepsExceededException(); + } + } + + public int decideSafeness(VertexSet separator) { + return decideSafeness(separator, DEFAULT_MAX_STEPS); + } + + public int decideSafeness(VertexSet separator, int maxSteps) { + try { + boolean b = isSafeSeparatorCounting(separator, maxSteps); + if (b) { + return steps + 1; + } + else { + return -(steps + 1); + } + } catch (StepsExceededException e) { + return -(steps + 1); + } + } + + public boolean isSafeSeparator(VertexSet separator) { + return isSafeSeparator(separator, DEFAULT_MAX_STEPS); + } + + public boolean isSafeSeparator(VertexSet separator, int maxSteps) { + try { + return isSafeSeparatorCounting(separator, maxSteps); + } catch (StepsExceededException e) { + return false; + } + } + + public boolean isSafeSeparatorCounting(VertexSet separator, int maxSteps) + throws StepsExceededException { + // System.out.println("isSafeSeparator " + separator); + this.maxSteps = maxSteps; + steps = 0; + if(separator.cardinality() <= 2){ + return true; + } + if(separator.cardinality() == 3){ + int first = separator.nextSetBit(0); + VertexSet s = g.neighborSet[first]; + if(s.intersects(separator)){ + return true; + } + } + ArrayList<VertexSet> components = g.getComponents(separator); + if (components.size() == 1) { + // System.err.println("non separator for safety testing:" + separator); + // throw new RuntimeException("non separator for safety testing:" + separator); + return false; + } + if (countMissings(separator) > MAX_MISSINGS) { + return false; + } + for (VertexSet compo: components) { + VertexSet sep = g.neighborSet(compo); + VertexSet rest = 
g.all.subtract(sep).subtract(compo); + VertexSet[] contracts = findCliqueMinor(sep, rest); + if (contracts == null) { + return false; + } + if (CONFIRM_MINOR) { + confirmCliqueMinor(sep, rest, contracts); + } + } + return true; + } + + private class LeftNode { + int index; + int vertex; + // ArrayList rightNeighborList; + // VertexSet rightNeighborSet; + + LeftNode(int index, int vertex) { + this.index = index; + this.vertex = vertex; + // rightNeighborList = new ArrayList<>(); + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("left" + index + "(" + vertex + "):"); + sb.append(", " + g.neighborSet[vertex]); + return sb.toString(); + } + } + + private class RightNode { + int index; + VertexSet vertexSet; + VertexSet neighborSet; + LeftNode assignedTo; + boolean printed; + + RightNode(int vertex) { + vertexSet = new VertexSet(g.n); + vertexSet.set(vertex); + neighborSet = g.neighborSet(vertexSet); + } + + RightNode(VertexSet vertexSet) { + this.vertexSet = vertexSet; + neighborSet = g.neighborSet(vertexSet); + } + + boolean potentiallyCovers(MissingEdge me) { + return + assignedTo == null && + neighborSet.get(me.left1.vertex) && + neighborSet.get(me.left2.vertex); + } + + boolean finallyCovers(MissingEdge me) { + return + assignedTo == me.left1 && + neighborSet.get(me.left2.vertex) || + assignedTo == me.left2 && + neighborSet.get(me.left1.vertex); + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("right" + index + ":" + vertexSet); + if (!printed) { + sb.append(", " + neighborSet); + } + if (assignedTo != null) { + sb.append("-> l" + assignedTo.index); + } + sb.append(", covers {"); + for (MissingEdge me: missingEdgeList) { + if (this.potentiallyCovers(me)) { + sb.append("me" + me.index + " "); + } + } + printed = true; + sb.append("}"); + + return sb.toString(); + } + + } + + private class MissingEdge { + int index; + LeftNode left1; + LeftNode left2; + boolean unAugmentable; + 
MissingEdge(LeftNode left1, LeftNode left2) { + this.left1 = left1; + this.left2 = left2; + } + + RightNode[] findCoveringPair() + throws StepsExceededException { + for (RightNode rn1: rightNodeList) { + if (rn1.neighborSet.get(left1.vertex) && + !rn1.neighborSet.get(left2.vertex)) { + for (RightNode rn2: rightNodeList) { + if (!rn2.neighborSet.get(left1.vertex) && + rn2.neighborSet.get(left2.vertex) && + connectable(rn1.vertexSet, rn2.vertexSet)) { + return new RightNode[]{rn1, rn2}; + } + } + } + } + return null; + } + + boolean isFinallyCovered() + throws StepsExceededException { + for (RightNode rn: rightNodeList) { + if (rn.finallyCovers(this)) { + return true; + } + } + return false; + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append("missing(" + left1.index + "," + + left2.index + "), covered by {"); + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(this)) { + sb.append("r" + rn.index + " "); + } + } + sb.append("}"); + return sb.toString(); + } + } + private VertexSet[] findCliqueMinor(VertexSet separator, VertexSet rest) + throws StepsExceededException { + int k = separator.cardinality(); + available = (VertexSet) rest.clone(); + leftNodes = new LeftNode[k]; + { + int i = 0; + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + leftNodes[i] = new LeftNode(i, v); + i++; + } + } + + missingEdgeList = new ArrayList<>(); + { + int i = 0; + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + int j = i + 1; + for (int w = separator.nextSetBit(v + 1); w >= 0; + w = separator.nextSetBit(w + 1)) { + if (!g.neighborSet[v].get(w)) { + missingEdgeList.add(new MissingEdge(leftNodes[i], leftNodes[j])); + } + j++; + } + i++; + } + } + + int m = missingEdgeList.size(); + + VertexSet[] result = new VertexSet[k]; + for (int i = 0; i < k; i++) { + result[i] = new VertexSet(g.n); + result[i].set(leftNodes[i].vertex); + } + + if (m == 0) { + return result; + } 
+ +// System.out.println(m + " missings for separator size " + k + +// " and total components size " + rest.cardinality()); + for (int i = 0; i < m; i++) { + missingEdgeList.get(i).index = i; + } + + rightNodeList = new ArrayList<>(); + VertexSet ns = g.neighborSet(separator); + ns.and(rest); + + for (int v = ns.nextSetBit(0); v >= 0; + v = ns.nextSetBit(v + 1)) { + if (g.neighborSet[v].cardinality() == 1) { + continue; + } + boolean useless = true; + for (MissingEdge me: missingEdgeList) { + if (g.neighborSet[v].get(me.left1.vertex) || + g.neighborSet[v].get(me.left2.vertex)) { + useless = false; + } + } + if (useless) { + continue; + } + RightNode rn = new RightNode(v); + rightNodeList.add(rn); + available.clear(v); + } + + while (true) { + + MissingEdge zc = zeroCovered(); + if (zc == null) { + break; + } + RightNode[] coveringPair = zc.findCoveringPair(); + if (coveringPair != null) { + mergeRightNodes(coveringPair); + } + else { + return null; + } + } + + boolean moving = true; + while (rightNodeList.size() > k/2 && moving) { + steps++; + if (steps > maxSteps) { + return null; + } + moving = false; + MissingEdge lc = leastCovered(); + if (lc == null) { + break; + } + RightNode[] coveringPair = lc.findCoveringPair(); + if (coveringPair != null) { + mergeRightNodes(coveringPair); + moving = true; + } + else { + lc.unAugmentable = true; + } + } + + ArrayList<RightNode> temp = rightNodeList; + rightNodeList = new ArrayList<>(); + + for (RightNode rn: temp) { + boolean covers = false; + for (MissingEdge me: missingEdgeList) { + if (rn.potentiallyCovers(me)) { + covers = true; + break; + } + } + if (covers) { + rightNodeList.add(rn); + } + } + + int nRight = rightNodeList.size(); + for (int i = 0; i < nRight; i++) { + rightNodeList.get(i).index = i; + } + + if (DEBUG) { + System.out.println(k + " lefts"); + for (LeftNode ln: leftNodes) { + System.out.println(ln); + } + System.out.println(nRight + " rights"); + for (RightNode rn: rightNodeList) { + System.out.println(rn); + } + 
System.out.println(m + " missings"); + for (MissingEdge me: missingEdgeList) { + System.out.println(me); + } + } + + while (!missingEdgeList.isEmpty()) { + if (DEBUG) { + System.out.println(missingEdgeList.size() + " missings"); + for (RightNode rn: rightNodeList) { + System.out.println(rn); + } + } + int[] bestPair = null; + int maxMinCover = 0; + int maxFc = 0; + + for (LeftNode ln: leftNodes) { + for (RightNode rn: rightNodeList) { + if (rn.assignedTo != null || + !rn.neighborSet.get(ln.vertex)) { + continue; + } + rn.assignedTo = ln; + int minCover = minCover(); + int fc = 0; + for (MissingEdge me: missingEdgeList) { + if (me.isFinallyCovered()) { + fc++; + } + } + rn.assignedTo = null; + if (bestPair == null || minCover > maxMinCover) { + maxMinCover = minCover; + bestPair = new int[] {ln.index, rn.index}; + maxFc = fc; + } + else if (minCover == maxMinCover && fc > maxFc) { + bestPair = new int[] {ln.index, rn.index}; + maxFc = fc; + } + } + } + if (maxMinCover == 0) { + return null; + } + + if (DEBUG) { + System.out.println("maxMinCover = " + maxMinCover + + ", maxFC = " + maxFc + + ", bestPair = " + Arrays.toString(bestPair)); + + } + rightNodeList.get(bestPair[1]).assignedTo = + leftNodes[bestPair[0]]; + + ArrayList<MissingEdge> temp1 = missingEdgeList; + missingEdgeList = new ArrayList<>(); + for (MissingEdge me: temp1) { + if (!me.isFinallyCovered()) { + missingEdgeList.add(me); + } + } + } + + if (DEBUG) { + System.out.println("assignment success"); + for (RightNode rn: rightNodeList) { + System.out.println(rn); + } + } + + for (RightNode rn: rightNodeList) { + if (rn.assignedTo != null) { + int i = rn.assignedTo.index; + result[i].or(rn.vertexSet); + } + } + return result; + } + + void confirmCliqueMinor(VertexSet separator, VertexSet rest, VertexSet[] contracts) { + { + int i = 0; + for (int v = separator.nextSetBit(0); v >= 0; + v = separator.nextSetBit(v + 1)) { + if (!contracts[i].get(v)) { + throw new RuntimeException("Not a clique minor: vertex " + v + + " is 
not contained in the contracted " + contracts[i]); + } + i++; + } + } + for (int i = 0; i < contracts.length; i++) { + for (int j = i + 1; j < contracts.length; j++) { + if (contracts[i].intersects(contracts[j])) { + throw new RuntimeException("Not a clique minor: contracts " + + contracts[i] + " and " + contracts[j] + " intersect with each other"); + } + if (!g.neighborSet(contracts[i]).intersects(contracts[j])) { + throw new RuntimeException("Not a clique minor: contracts " + + contracts[i] + " and " + contracts[j] + " are not adjacent to each other"); + } + } + } + + for (int i = 0; i < contracts.length; i++) { + if (!g.isConnected(contracts[i])) { + throw new RuntimeException("Not a clique minor: contracted " + + contracts[i] + " is not connected"); + } + } + } + + int minCover() throws StepsExceededException { + int minCover = g.n; + for (MissingEdge me: missingEdgeList) { + if (me.isFinallyCovered()) { + continue; + } + int nCover = 0; + addSteps(1); + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(me)) { + nCover++; + } + } + if (nCover < minCover) { + minCover = nCover; + } + } + return minCover; + } + + MissingEdge leastCovered() throws StepsExceededException { + int minCover = 0; + MissingEdge result = null; + for (MissingEdge me: missingEdgeList) { + if (me.unAugmentable) { + continue; + } + int nCover = 0; + addSteps(1); + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(me)) { + nCover++; + } + } + if (result == null || nCover < minCover) { + minCover = nCover; + result = me; + } + } + return result; + } + + MissingEdge zeroCovered() throws StepsExceededException { + for (MissingEdge me: missingEdgeList) { + int nCover = 0; + addSteps(1); + for (RightNode rn: rightNodeList) { + if (rn.potentiallyCovers(me)) { + nCover++; + } + } + if (nCover == 0) { + return me; + } + } + return null; + } + + boolean connectable(VertexSet vs1, VertexSet vs2) + throws StepsExceededException { + VertexSet vs = (VertexSet) vs1.clone(); + 
while (true) { + addSteps(1); + VertexSet ns = g.neighborSet(vs); + if (ns.intersects(vs2)) { + return true; + } + ns.and(available); + if (ns.isEmpty()) { + return false; + } + vs.or(ns); + } + } + + void mergeRightNodes(RightNode[] coveringPair) { + RightNode rn1 = coveringPair[0]; + RightNode rn2 = coveringPair[1]; + + VertexSet connected = connect(rn1.vertexSet, rn2.vertexSet); + RightNode rn = new RightNode(connected); + rightNodeList.remove(rn1); + rightNodeList.remove(rn2); + rightNodeList.add(rn); + } + + VertexSet connect(VertexSet vs1, VertexSet vs2) { + ArrayList<VertexSet> layerList = new ArrayList<>(); + + VertexSet vs = (VertexSet) vs1.clone(); + while (true) { + VertexSet ns = g.neighborSet(vs); + if (ns.intersects(vs2)) { + break; + } + ns.and(available); + layerList.add(ns); + vs.or(ns); + } + + VertexSet result = vs1.unionWith(vs2); + + VertexSet back = g.neighborSet(vs2); + for (int i = layerList.size() - 1; i >= 0; i--) { + VertexSet ns = layerList.get(i); + ns.and(back); + int v = ns.nextSetBit(0); + result.set(v); + available.clear(v); + back = g.neighborSet[v]; + } + return result; + } + + int countMissings(VertexSet s) { + int count = 0; + for (int v = s.nextSetBit(0); v >= 0; + v = s.nextSetBit(v + 1)) { + count += s.subtract(g.neighborSet[v]).cardinality() - 1; + } + return count / 2; + } + + private static class StepsExceededException extends Exception { + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/Separator.java b/solvers/TCS-Meiji/tw/heuristic/Separator.java new file mode 100644 index 0000000..d9c12c8 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/Separator.java @@ -0,0 +1,212 @@ +/* + * Copyright (c) 2017, Hisao Tamaki and Hiromu Ohtsuka, Keitaro Makii +*/ + +package tw.heuristic; + +import java.util.ArrayList; +import java.util.Arrays; + +public class Separator implements Cloneable{ + Bag parent; + Graph graph; + VertexSet vertexSet; + int size; + ArrayList<Bag> incidentBags; + boolean safe; + boolean unsafe; + boolean wall; + + int[] 
parentVertex; + int safeSteps; + + public Separator(Bag parent) { + this.parent = parent; + graph = parent.graph; + incidentBags = new ArrayList<>(); + } + + public Separator(Bag parent, VertexSet vertexSet) { + this(parent); + this.vertexSet = vertexSet; + size = vertexSet.cardinality(); + } + + public void addIncidentBag(Bag bag) { + incidentBags.add(bag); + } + + public void removeVertex(int v) { + if (vertexSet.get(v)) { + size--; + } + vertexSet.clear(v); + } + + public void invert() { + vertexSet = convert(vertexSet, parent.inv); + parent = parent.parent; + } + + public void convert() { + vertexSet = convert(vertexSet, parent.conv); + } + + private VertexSet convert(VertexSet s, int[] conv) { + VertexSet result = new VertexSet(); + for (int v = s.nextSetBit(0); v >= 0; + v = s.nextSetBit(v + 1)) { + result.set(conv[v]); + } + return result; + } + + void collectBagsToPack(ArrayList<Bag> list, Bag from) { + for (Bag bag: incidentBags) { + if (bag != from) { + bag.collectBagsToPack(list, this); + } + } + } + + public void figureOutSafetyBySPT() { + if (!safe && !unsafe) { + safe = isSafe(); + unsafe = !safe; + } + } + + public boolean isSafe() { + SafeSeparator ss = new SafeSeparator(parent.graph); + //return isSafeBySPT(); + safeSteps = ss.decideSafeness(vertexSet); + return safeSteps > 0; + //return ss.isSafeSeparator(vertexSet); + } + + public int getSteps(){ + return Math.abs(safeSteps); + } + + public boolean isSafeBySPT() { + parentVertex = new int[graph.n]; + ArrayList<VertexSet> components = + graph.getComponents(vertexSet); + for (VertexSet compo: components) { + if (!isSafeComponentBySPT(compo)) { + return false; + } + } + return true; + } + + private boolean isSafeComponentBySPT(VertexSet component) { + VertexSet neighborSet = graph.neighborSet(component); + VertexSet rest = graph.all.subtract(neighborSet).subtract(component); + + for (int v = neighborSet.nextSetBit(0); v >= 0; + v = neighborSet.nextSetBit(v + 1)) { + VertexSet missing = 
neighborSet.subtract(graph.neighborSet[v]); + + for (int w = missing.nextSetBit(0); w >= 0 && w <= v; + w = missing.nextSetBit(w + 1)) { + missing.clear(w); + } + + if (!missing.isEmpty()) { + VertexSet spt = shortestPathTree(v, missing, rest); + if (spt == null) { + return false; + } + rest.andNot(spt); + } + } + return true; + } + + private VertexSet shortestPathTree(int v, VertexSet targets, + VertexSet available) { + VertexSet union = available.unionWith(targets); + + VertexSet reached = new VertexSet(graph.n); + reached.set(v); + VertexSet leaves = (VertexSet) reached.clone(); + while (!targets.isSubset(reached) && !leaves.isEmpty()) { + VertexSet newLeaves = new VertexSet(graph.n); + for (int u = leaves.nextSetBit(0); u >= 0; + u = leaves.nextSetBit(u + 1)) { + VertexSet children = + graph.neighborSet[u].intersectWith(union).subtract(reached); + for (int w = children.nextSetBit(0); w >= 0; + w = children.nextSetBit(w + 1)) { + reached.set(w); + parentVertex[w] = u; + if (available.get(w)) { + newLeaves.set(w); + } + } + } + leaves = newLeaves; + } + + if (!targets.isSubset(reached)) { + return null; + } + + VertexSet spt = new VertexSet(graph.n); + for (int u = targets.nextSetBit(0); u >= 0; + u = targets.nextSetBit(u + 1)) { + int w = parentVertex[u]; + while (w != v) { + spt.set(w); + w = parentVertex[w]; + } + } + return spt; + } + + + public void dump(String indent) { + System.out.println(indent + "sep:" + toString()); + } + + public String toString() { + StringBuilder sb = new StringBuilder(); + sb.append(vertexSet); + sb.append("("); + for (Bag bag: incidentBags){ + if (bag == null) { + sb.append("null bag "); + } + else { + sb.append(parent.nestedBags.indexOf(bag) + ":" + bag.vertexSet); + sb.append(" "); + } + } + sb.append(")"); + + return sb.toString(); + } + + @Override + public Separator clone(){ + try{ + Separator result = (Separator)super.clone(); + + if(vertexSet != null){ + result.vertexSet = (VertexSet)vertexSet.clone(); + } + if(parentVertex 
!= null){ + result.parentVertex = Arrays.copyOf(parentVertex, parentVertex.length); + } + if(incidentBags != null){ + result.incidentBags = new ArrayList< >(incidentBags); + } + + return result; + } + catch(CloneNotSupportedException cnse){ + throw new AssertionError(); + } + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/TreeDecomposition.java b/solvers/TCS-Meiji/tw/heuristic/TreeDecomposition.java new file mode 100644 index 0000000..1ee3d09 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/TreeDecomposition.java @@ -0,0 +1,949 @@ +/* + * Copyright (c) 2016, Hisao Tamaki and Hiromu Ohtsuka + */ +package tw.heuristic; + +import java.io.BufferedReader; + +import java.io.File; +import java.io.FileNotFoundException; +import java.io.FileOutputStream; +import java.io.FileReader; +import java.io.IOException; +import java.io.PrintStream; +import java.util.ArrayList; +import java.util.Arrays; + +/** + * This class provides a representation of tree-decompositions of graphs. + * It is based on the {@code Graph} class for the representation of graphs. + * Members representing the bags and tree edges are all public. + * Reading from and writing to files, in the .td format of the PACE challenge, + * are provided. + * + * @author Hisao Tamaki + */ + +public class TreeDecomposition { + /** + * number of bags + */ + public int nb; + + /** + * intended width of this decomposition + */ + public int width; + + /** + * the graph decomposed + */ + public Graph g; + + /** + * array of bags, each of which is an int array listing vertices. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1 + */ + public int[][] bags; + + /** + * array of bags, each of which is a {@code VertexSet} representing + * the set of vertices in the bag. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public VertexSet[] bagSets; + + /** + * array of node degrees. 
{@code degree[i]} is the number of bags adjacent + * to the ith bag. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public int degree[]; + + /** + * array of int arrays representing neighbor lists. + * {@code neighbor[i][j]} is the bag index (in {@code bags} array) of + * the jth bag adjacent to the ith bag. + * The length of this array is {@code nb + 1} as the bag number (index) + * starts from 1. + */ + public int neighbor[][]; + + private static boolean debug = false; + + /** + * Construct a tree decomposition with the specified number of bags, + * intended width, and the graph decomposed. + * @param nb the number of bags + * @param width the intended width + * @param g the graph decomposed + */ + public TreeDecomposition(int nb, int width, Graph g) { + this.nb = nb; + this.width = width; + this.g = g; + bags = new int[nb + 1][]; + degree = new int[nb + 1]; + neighbor = new int[nb + 1][]; + } + + /** + * Sets the ith bag to the given bag. + * @param i the index of the bag. 1 <= i <= nb must hold + * @param bag int array representing the bag + */ + public void setBag(int i, int[] bag) { + bags[i] = bag; + } + + /** + * Adds the given bag. The number of bags {@code nb} is incremented. 
+ * @param bag int array representing the bag to be added + */ + public int addBag(int[] bag) { + nb++; + if (debug) { + System.out.print(nb + "th bag:"); + } + for (int i = 0; i < bag.length; i++) { + if (debug) { + System.out.print(" " + bag[i]); + } + } + if (debug) { + System.out.println(); + } + bags = Arrays.copyOf(bags, nb + 1); + bags[nb] = bag; + degree = Arrays.copyOf(degree, nb + 1); + neighbor = Arrays.copyOf(neighbor, nb + 1); + if (bagSets != null) { + bagSets = Arrays.copyOf(bagSets, nb + 1); + bagSets[nb] = new VertexSet(bag); + } + return nb; + } + + /** + * Adds an edge; + * the neighbor lists of both bags, as well as the degrees, + * are updated + * @param i index of one bag of the edge + * @param j index of the other bag of the edge + */ + public void addEdge(int i, int j) { + if (debug) { + System.out.println("add decomposition edge (" + i + "," + j + ")"); + } + addHalfEdge(i, j); + addHalfEdge(j, i); + } + + /** + * Adds a bag to the neighbor list of another bag + * @param i index of the bag of which the neighbor list is updated + * @param j index of the bag to be added to {@code neighbor[i]} + */ + private void addHalfEdge(int i, int j) { + if (neighbor[i] == null) { + degree[i] = 1; + neighbor[i] = new int[]{j}; + } + else if (indexOf(j, neighbor[i]) < 0){ + degree[i]++; + neighbor[i] = Arrays.copyOf(neighbor[i], degree[i]); + neighbor[i][degree[i] - 1] = j; + } + } + + /** + * Combine the given tree-decomposition into this target tree-decomposition. + * The following situation is assumed. Let G be the graph for which this + * target tree-decomposition is being constructed. Currently, + * this tree-decomposition contains bags for some subgraph of G. + * The tree-decomposition of some other part of G is given by the argument. + * The numbering of the vertices in the argument tree-decomposition differs + * from that in G and the conversion map is provided by another argument.
+ * @param td tree-decomposition to be combined + * @param conv the conversion map that maps the vertex number in the graph of + * tree-decomposition {@code td} into the vertex number of the graph of this + * target tree-decomposition. + */ + public void combineWith(TreeDecomposition td, int conv[]) { + this.width = Math.max(this.width, td.width); + int nb0 = nb; + for (int i = 1; i <= td.nb; i++) { + addBag(convertBag(td.bags[i], conv)); + } + for (int i = 1; i <= td.nb; i++) { + for (int j = 0; j < td.degree[i]; j++) { + int h = td.neighbor[i][j]; + addHalfEdge(nb0 + i, nb0 + h); + } + } + } + /** + * Combine the given tree-decomposition into this target tree-decomposition. + * The assumptions are the same as in the method with two parameters. + * The third parameter specifies the way in which the two parts + * of the decompositions are connected by a tree edge of the decomposition. + * + * @param td tree-decomposition to be combined + * @param conv the conversion map that maps the vertex number in the graph of + * tree-decomposition {@code td} into the vertex number of the graph of this + * target tree-decomposition.
+ * @param v int array listing vertices: an existing bag containing all of + * these vertices and a bag in the combined part containing all of + * these vertices are connected by a tree edge; if {@code v} is null + * then the first bags of the two parts are connected + */ + public void combineWith(TreeDecomposition td, int conv[], int v[]) { + this.width = Math.max(this.width, td.width); + int nb0 = nb; + for (int i = 1; i <= td.nb; i++) { + addBag(convertBag(td.bags[i], conv)); + } + for (int i = 1; i <= td.nb; i++) { + for (int j = 0; j < td.degree[i]; j++) { + int h = td.neighbor[i][j]; + addEdge(nb0 + i, nb0 + h); + } + } + if (v == null) { + addEdge(1, nb0 + 1); + } + else { + int k = findBagWith(v, 1, nb0); + int h = findBagWith(v, nb0 + 1, nb); + if (k < 0) { + System.out.println(Arrays.toString(v) + " not found in the first " + nb0 + " bags"); + } + if (h < 0) { + System.out.println(Arrays.toString(v) + " not found in the last " + td.nb + " bags"); + } + addEdge(k, h); + } + } + + /** + * Converts the vertex numbers in the bag + * @param bag input bag + * @param conv conversion map of the vertices + * @return the bag resulting from the conversion, + * containing {@code conv[v]} for each v in the original bag + */ + + private int[] convertBag(int bag[], int conv[]) { + int[] result = new int[bag.length]; + for (int i = 0; i < bag.length; i++) { + result[i] = conv[bag[i]]; + } + return result; + } + + /** + * Find a bag containing all the listed vertices, + * with bag index in the specified range + * @param v int array listing vertices + * @param s the starting bag index + * @param t the ending bag index + * @return index of the bag containing all the + * vertices listed in {@code v}; -1 if none of the + * bags {@code bags[i]}, s <= i <= t, satisfies this + * condition.
+ */ + private int findBagWith(int v[], int s, int t) { + for (int i = s; i <= t; i++) { + boolean all = true; + for (int j = 0; j < v.length; j++) { + if (indexOf(v[j], bags[i]) < 0) { + all = false; + } + } + if (all) return i; + } + return -1; + } + + /** + * write this tree decomposition to the given print stream + * in the PACE .td format + * @param ps print stream + */ + public void writeTo(PrintStream ps) { + StringBuilder sb = new StringBuilder(); + //ps.println("s td " + nb + " " + (width + 1) + " " + g.n); + sb.append("s td " + nb + " " + (width + 1) + " " + g.n + "\n"); + for (int i = 1; i <= nb; i++) { + //ps.print("b " + i); + sb.append("b " + i); + for (int j = 0; j < bags[i].length; j++) { + //ps.print(" " + (bags[i][j] + 1)); + sb.append(" " + (bags[i][j] + 1)); + } + sb.append("\n"); + //ps.println(); + } + for (int i = 1; i <= nb; i++) { + for (int j = 0; j < degree[i]; j++) { + int h = neighbor[i][j]; + if (i < h) { + //ps.println(i + " " + h); + sb.append(i + " " + h + "\n"); + } + } + } + ps.print(sb.toString()); + ps.flush(); + } + + /** + * validates this target tree-decomposition, + * checking the three required conditions. + * The validation result is printed to the + * standard output + */ + public void validate() { + System.out.println("validating nb = " + nb + ", ne = " + numberOfEdges()); + boolean error = false; + if (!isConnected()) { + System.out.println("is not connected "); + error = true; + } + if (isCyclic()) { + System.out.println("has a cycle "); + error = true; + } + if (tooLargeBag()) { + System.out.println("too large bag "); + error = true; + } + int v = missinVertex(); + if (v >= 0) { + System.out.println("a vertex " + v + " missing "); + error = true; + } + int edge[] = missingEdge(); + if (edge != null) { + System.out.println("an edge " + Arrays.toString(edge) + " is missing "); + error = true; + } + if (violatesConnectivity()) { + System.out.println("connectivity property is violated "); + error = true; + } + if (!error) {
+ System.out.println("validation ok"); + } + } + + public boolean isValid(PrintStream ps) { + ps.println("validating nb = " + nb + ", ne = " + numberOfEdges()); + boolean error = false; + if (!isConnected()) { + ps.println("is not connected "); + error = true; + } + if (isCyclic()) { + ps.println("has a cycle "); + error = true; + } + if (tooLargeBag()) { + ps.println("too large bag "); + error = true; + } + int v = missinVertex(); + if (v >= 0) { + ps.println("a vertex " + v + " missing "); + error = true; + } + int edge[] = missingEdge(); + if (edge != null) { + ps.println("an edge " + Arrays.toString(edge) + " is missing "); + error = true; + } + if (violatesConnectivity()) { + ps.println("connectivity property is violated "); + error = true; + } + if (!error) { + ps.println("validation ok"); + } + return !error; + } + + /** + * Checks if this tree-decomposition is connected as + * a graph of bags + * @return {@code true} if this tree-decomposition is connected, + * {@code false} otherwise + */ + + private boolean isConnected() { + boolean mark[] = new boolean [nb + 1]; + depthFirst(1, mark); + for (int i = 1; i <= nb; i++) { + if (!mark[i]) { + return false; + } + } + return true; + } + + private void depthFirst(int i, boolean mark[]) { + mark[i] = true; + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (!mark[j]) { + depthFirst(j, mark); + } + } + } + + /** + * Checks if this tree-decomposition is acyclic as + * a graph of bags + * @return {@code true} if this tree-decomposition is acyclic, + * {@code false} otherwise + */ + + private boolean isCyclic() { + boolean mark[] = new boolean [nb + 1]; + return isCyclic(1, mark, 0); + } + + private boolean isCyclic(int i, boolean mark[], + int parent) { + mark[i] = true; + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j == parent) { + continue; + } + if (mark[j]) { + return true; + } + else { + boolean b = isCyclic(j, mark, i); + if (b) return true; + } + } + return false; +
} + + /** + * Checks if the bag size is within the declared + * tree-width plus one + * @return {@code true} if there is some violating bag, + * {@code false} otherwise + */ + private boolean tooLargeBag() { + for (int i = 1; i <= nb; i++) { + if (bags[i].length > width + 1) { + return true; + } + } + return false; + } + + /** + * Finds a vertex of the graph that does not appear + * in any of the bags + * @return the missing vertex number; -1 if there is no + * missing vertex + */ + private int missinVertex() { + for (int i = 0; i < g.n; i++) { + if (!appears(i)) { + return i; + } + } + return -1; + } + + /** + * Checks if the given vertex appears in some bag + * of this target tree-decomposition + * @param v vertex number + * @return {@code true} if vertex {@code v} appears in + * some bag + */ + private boolean appears(int v) { + for (int i = 1; i <= nb; i++) { + if (indexOf(v, bags[i]) >= 0) { + return true; + } + } + return false; + } + + /** + * Checks if there is some edge not appearing in any + * bag of this target tree-decomposition + * @return two-element int array representing the + * missing edge; null if there is no missing edge + */ + private int[] missingEdge() { + for (int i = 0; i < g.n; i++) { + for (int j = 0; j < g.degree[i]; j++) { + int h = g.neighbor[i][j]; + if (!appears(i, h)) { + return new int[]{i, h}; + } + } + } + return null; + } + + /** + * Checks if the edge between the two specified vertices + * appears in some bag of this target tree-decomposition + * @param u one endvertex of the edge + * @param v the other endvertex of the edge + * @return {@code true} if this edge appears in some bag; + * {@code false} otherwise + */ + private boolean appears(int u, int v) { + for (int i = 1; i <= nb; i++) { + if (indexOf(u, bags[i]) >= 0 && + indexOf(v, bags[i]) >= 0) { + return true; + } + } + return false; + } + + /** + * Checks if this target tree-decomposition violates + * the connectivity condition for some vertex of the graph + * @return
{@code true} if the condition is violated + * for some vertex; {@code false} otherwise. + */ + private boolean violatesConnectivity() { + for (int v = 0; v < g.n; v++) { + if (violatesConnectivity(v)) { + return true; + } + } + return false; + } + + /** + * Checks if this target tree-decomposition violates + * the connectivity condition for the given vertex {@code v} + * @param v vertex number + * @return {@code true} if the connectivity condition is violated + * for vertex {@code v} + */ + private boolean violatesConnectivity(int v) { + boolean mark[] = new boolean[nb + 1]; + + for (int i = 1; i <= nb; i++) { + if (indexOf(v, bags[i]) >= 0) { + // mark only from the first bag containing v; marking from + // every such bag would make the check below vacuous + markFrom(i, v, mark); + break; + } + } + + for (int i = 1; i <= nb; i++) { + if (!mark[i] && indexOf(v, bags[i]) >= 0) { + return true; + } + } + return false; + } + + /** + * Mark the tree nodes (bags) containing the given vertex + * that are reachable from the bag numbered {@code i}, + * without going through the nodes already marked + * @param i bag number + * @param v vertex number + * @param mark boolean array recording the marks: + * {@code mark[i]} records whether bag {@code i} has been marked + */ + private void markFrom(int i, int v, boolean mark[]) { + if (mark[i]) { + return; + } + mark[i] = true; + + for (int j = 0; j < degree[i]; j++) { + int h = neighbor[i][j]; + if (indexOf(v, bags[h]) >= 0) { + markFrom(h, v, mark); + } + } + } + + /** + * Simplify this target tree-decomposition by + * forcing the intersection between each pair of + * adjacent bags to be a minimal separator + */ + + public void minimalize() { + if (bagSets == null) { + bagSets = new VertexSet[nb + 1]; + for (int i = 1; i <= nb; i++) { + bagSets[i] = new VertexSet(bags[i]); + } + } + for (int i = 1; i <= nb; i++) { + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + VertexSet separator = bagSets[i].intersectWith(bagSets[j]); + VertexSet iSide = new VertexSet(g.n); + collectVertices(i, j, iSide); + iSide.andNot(separator); + VertexSet
neighbors = g.neighborSet(iSide); + VertexSet delta = separator.subtract(neighbors); + bagSets[i].andNot(delta); + } + } + for (int i = 1; i <= nb; i++) { + bags[i] = bagSets[i].toArray(); + } + } + + /** + * Collect vertices in the bags in the specified + * subtree of this target tree-decomposition + * @param i the bag index of the root of the subtree + * @param exclude the bag index of the neighbor + * to be excluded from the subtree + * @param set the {@code VertexSet} in which to collect the + * vertices + */ + private void collectVertices(int i, int exclude, VertexSet set) { + set.or(bagSets[i]); + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j != exclude) { + collectVertices(j, i, set); + } + } + } + + /** + * Canonicalize this target tree-decomposition by + * forcing every bag to be a potential maximal clique. + * A naive implementation with no efficiency considerations. + */ + + public void canonicalize() { + if (bagSets == null) { + bagSets = new VertexSet[nb + 1]; + for (int i = 1; i <= nb; i++) { + bagSets[i] = new VertexSet(bags[i]); + } + } + boolean moving = true; + while (moving) { + moving = false; + int i = 1; + while (i <= nb) { + if (trySplit(i)) { + moving = true; + } + i++; + } + } + } + + private boolean trySplit(int i) { + VertexSet neighborSets[] = new VertexSet[g.n]; + VertexSet b = bagSets[i]; + ArrayList components = g.getComponents(b); + VertexSet seps[] = new VertexSet[components.size()]; + for (int j = 0; j < seps.length; j++) { + seps[j] = g.neighborSet(components.get(j)).intersectWith(b); + } + + for (int v = b.nextSetBit(0); v >= 0; + v = b.nextSetBit(v + 1)) { + VertexSet ns = g.neighborSet[v].intersectWith(b); + for (VertexSet sep: seps) { + if (sep.get(v)) { + ns.or(sep); + } + } + ns.clear(v); + neighborSets[v] = ns.intersectWith(b); + } + + for (int v = b.nextSetBit(0); v >= 0; + v = b.nextSetBit(v + 1)) { + VertexSet left = neighborSets[v]; + left.set(v); + VertexSet right =
b.subtract(left); + if (right.isEmpty()) { + continue; + } + VertexSet separator = new VertexSet(g.n); + for (int w = right.nextSetBit(0); w >= 0; + w = right.nextSetBit(w + 1)) { + separator.or(neighborSets[w]); + } + right.or(separator); + + int j = addBag(right.toArray()); + + bags[i] = left.toArray(); + bagSets[i] = left; + + int ni = 0; + int nj = 0; + neighbor[j] = new int[degree[i]]; + for (int k = 0; k < degree[i]; k++) { + int h = neighbor[i][k]; + if (bagSets[h].intersects(left)) { + neighbor[i][ni++] = h; + } + else { + neighbor[j][nj++] = h; + } + } + degree[i] = ni; + degree[j] = nj; + neighbor[i] = Arrays.copyOf(neighbor[i], ni); + neighbor[j] = Arrays.copyOf(neighbor[j], nj); + + addEdge(i, j); + + for (int k = 0; k < nj; k++) { + int h = neighbor[j][k]; + for (int l = 0; l < degree[h]; l++) { + if (neighbor[h][l] == i) { + neighbor[h][l] = j; + } + } + } + return true; + } + return false; + } + + /** + * Tests if the target tree-decomposition is canonical, + * i.e., consists of potential maximal cliques. 
+ */ + + public boolean isCanonical() { + for (int i = 1; i <= nb; i++) { + if (!isCanonicalBag(new VertexSet(bags[i]))) { + return false; + } + } + return true; + } + + private boolean isCanonicalBag(VertexSet b) { + ArrayList components = g.getComponents(b); + + for (int v = b.nextSetBit(0); v >= 0; + v = b.nextSetBit(v + 1)) { + for (int w = b.nextSetBit(v + 1); w >= 0; + w = b.nextSetBit(w + 1)) { + if (g.neighborSet[v].get(w)) { + continue; + } + boolean covered = false; + for (VertexSet compo: components) { + VertexSet ns = g.neighborSet(compo); + if (ns.get(v) && ns.get(w)) { + covered = true; + break; + } + } + if (!covered) { + return false; + } + } + } + return true; + } + + public void analyze(int rootIndex) { + if (bagSets == null) { + bagSets = new VertexSet[nb + 1]; + for (int i = 1; i <= nb; i++) { + bagSets[i] = new VertexSet(bags[i]); + } + } + + analyze(rootIndex, -1); + } + + private void analyze(int i, int exclude) { + System.out.println(i + ": " + bagSets[i]); + VertexSet separator = bagSets[i]; + VertexSet set[] = new VertexSet[degree[i]]; + + ArrayList components = g.getComponents(separator); + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + set[a] = new VertexSet(g.n); + collectVertices(j, i, set[a]); + } + for (int a = 0; a < degree[i]; a++) { + int j = neighbor[i][a]; + if (j != exclude) { + System.out.println(" subtree at " + j); + for (VertexSet compo: components) { + if (compo.isSubset(set[a])) { + System.out.println(" contains " + compo); + } + else if (compo.intersects(set[a])) { + System.out.println(" intersects " + compo); + System.out.println(" but missing " + + compo.subtract(set[a])); + } + } + } + } + for (VertexSet compo: components) { + boolean intersecting = false; + for (int a = 0; a < degree[i]; a++) { + if (compo.intersects(set[a])) { + intersecting = true; + } + } + if (!intersecting) { + System.out.println(" component totally missing: " + + compo); + } + } + for (int a = 0; a < degree[i]; a++) { + int j 
= neighbor[i][a]; + if (j != exclude) { + analyze(j, i); + } + } + } + + /** + * Computes the number of tree edges of this tree-decomposition, + * which is the sum of the node degrees divided by 2 + * @return the number of edges + */ + private int numberOfEdges() { + int count = 0; + for (int i = 1; i <= nb; i++) { + count += degree[i]; + } + return count / 2; + } + /** + * Finds the index at which the given element + * is found in the given array. + * @param x int value to be searched + * @param a int array in which to find {@code x} + * @return {@code i} such that {@code a[i] == x}; + * -1 if no such index exists + */ + + private int indexOf(int x, int a[]) { + return indexOf(x, a, a.length); + } + + /** + * Finds the index at which the given element + * is found in the given array. + * @param x int value to be searched + * @param a int array in which to find {@code x} + * @param n the number of elements to be searched + * in the array + * @return {@code i} such that {@code a[i] == x} and + * 0 <= i < n; -1 if no such index exists + */ + private int indexOf(int x, int a[], int n) { + for (int i = 0; i < n; i++) { + if (x == a[i]) { + return i; + } + } + return -1; + } + + /** + * Reads the tree-decomposition for a given graph from + * a file at a given path and with a given name, in the + * PACE .td format; the extension .td is added to the name.
+ * @param path path at which the file is found + * @param name file name, without the extension + * @param g graph + * @return the tree-decomposition read + */ + public static TreeDecomposition readDecomposition(String path, String name, Graph g) { + File file = new File(path + "/" + name + ".td"); + try { + BufferedReader br = new BufferedReader(new FileReader(file)); + String line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + if (line.startsWith("s")) { + String s[] = line.split(" "); + if (!s[1].equals("td")) { + throw new RuntimeException("!!Not treewidth solution " + line); + } + int nb = Integer.parseInt(s[2]); + int width = Integer.parseInt(s[3]) - 1; + int n = Integer.parseInt(s[4]); + + System.out.println("nb = " + nb + ", width = " + width + ", n = " + n); + TreeDecomposition td = new TreeDecomposition(0, width, g); + + for (int i = 0; i < nb; i++) { + line = br.readLine(); + while (line.startsWith("c")) { + line = br.readLine(); + } + s = line.split(" "); + + if (!s[0].equals("b")) { + throw new RuntimeException("!!line starting with 'b' expected"); + } + + if (!s[1].equals(Integer.toString(i + 1))) { + throw new RuntimeException("!!Bag number " + (i + 1) + " expected"); + } + + int bag[] = new int[s.length - 2]; + for (int j = 0; j < bag.length; j++) { + bag[j] = Integer.parseInt(s[j + 2]) - 1; + } + td.addBag(bag); + } + + while (true) { + line = br.readLine(); + while (line != null && line.startsWith("c")) { + line = br.readLine(); + } + if (line == null) { + break; + } + + s = line.split(" "); + + int j = Integer.parseInt(s[0]); + int k = Integer.parseInt(s[1]); + + td.addEdge(j, k); + td.addEdge(k, j); + } + + return td; + } + } catch (FileNotFoundException e) { + e.printStackTrace(); + } catch (IOException e) { + e.printStackTrace(); + } + return null; + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/Unsigned.java b/solvers/TCS-Meiji/tw/heuristic/Unsigned.java new file mode 100644 index 0000000..8c3b20b --- 
/dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/Unsigned.java @@ -0,0 +1,177 @@ +/* + * Copyright (c) 2017, Hiromu Otsuka +*/ + +package tw.heuristic; + +public class Unsigned{ + private Unsigned(){} + + public static final long ALL_ONE_BIT = 0xFFFFFFFFFFFFFFFFL; + + public static long consecutiveOneBit(int i, int j){ + return (ALL_ONE_BIT >>> (64 - j)) & (ALL_ONE_BIT << i); + } + + public static byte byteValue(long value){ + return (byte)value; + } + + public static short shortValue(long value){ + return (short)value; + } + + public static int intValue(long value){ + return (int)value; + } + + public static int toUnsignedInt(byte b){ + return Byte.toUnsignedInt(b); + } + + public static int toUnsignedInt(short s){ + return Short.toUnsignedInt(s); + } + + public static long toUnsignedLong(byte b){ + return Byte.toUnsignedLong(b); + } + + public static long toUnsignedLong(short s){ + return Short.toUnsignedLong(s); + } + + public static long toUnsignedLong(int i){ + return Integer.toUnsignedLong(i); + } + + public static int binarySearch(byte[] a, byte key){ + return binarySearch(a, 0, a.length, key); + } + + public static int binarySearch(short[] a, short key){ + return binarySearch(a, 0, a.length, key); + } + + public static int binarySearch(int[] a, int key){ + return binarySearch(a, 0, a.length, key); + } + + public static int binarySearch(long[] a, long key){ + return binarySearch(a, 0, a.length, key); + } + + public static int compare(byte a, byte b){ + return Integer.compareUnsigned( + toUnsignedInt(a), toUnsignedInt(b)); + } + + public static int compare(short a, short b){ + return Integer.compareUnsigned( + toUnsignedInt(a), toUnsignedInt(b)); + } + + public static int compare(int a, int b){ + return Integer.compareUnsigned(a, b); + } + + public static int compare(long a, long b){ + return Long.compareUnsigned(a, b); + } + + public static int binarySearch(byte[] a, + int fromIndex, int toIndex, byte key){ + int low = fromIndex; + int high = toIndex - 1; + + 
while(low <= high) { + int mid = (low + high) >>> 1; + byte midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } + + public static int binarySearch(short[] a, + int fromIndex, int toIndex, short key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + short midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } + + public static int binarySearch(int[] a, + int fromIndex, int toIndex, int key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + int midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } + + public static int binarySearch(long[] a, + int fromIndex, int toIndex, long key){ + int low = fromIndex; + int high = toIndex - 1; + + while(low <= high) { + int mid = (low + high) >>> 1; + long midVal = a[mid]; + int cmp = compare(midVal, key); + + if(cmp < 0){ + low = mid + 1; + } + else if(cmp > 0){ + high = mid - 1; + } + else{ + return mid; + } + } + + return -(low + 1); + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/VertexSet.java b/solvers/TCS-Meiji/tw/heuristic/VertexSet.java new file mode 100644 index 0000000..2ff5c05 --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/VertexSet.java @@ -0,0 +1,606 @@ +/* + * Copyright (c) 2017, Hiromu Ohtsuka +*/ + +package tw.heuristic; + +public class VertexSet +implements Comparable< VertexSet >, Cloneable{ + private int TH1 = 256; + public static enum Type{ + ARRAYSET, XBITSET + }; + private XBitSet xbitset; + private ArraySet arrayset; + private Type type = Type.ARRAYSET; + + public VertexSet(){ + arrayset = 
new ArraySet(); + } + + public VertexSet(int n){ + this(); + TH1 = n / 100; + } + + public VertexSet(int n, int[] a){ + TH1 = n / 100; + if(a.length <= TH1){ + arrayset = new ArraySet(a); + } + else{ + type = Type.XBITSET; + xbitset = new XBitSet(a); + } + } + + public VertexSet(int[] a){ + if(a.length <= TH1){ + arrayset = new ArraySet(a); + } + else{ + type = Type.XBITSET; + xbitset = new XBitSet(a); + } + } + + private VertexSet(ArraySet as){ + arrayset = as; + ensureType(); + } + + private VertexSet(XBitSet xbs){ + xbitset = xbs; + type = Type.XBITSET; + ensureType(); + } + + private void toArraySet(){ + if(type == Type.ARRAYSET){ + return; + } + arrayset = arraySetOf(xbitset); + type = Type.ARRAYSET; + xbitset = null; + } + + private void toXBitSet(){ + if(type == Type.XBITSET){ + return; + } + xbitset = xBitSetOf(arrayset); + type = Type.XBITSET; + arrayset = null; + } + + private static XBitSet xBitSetOf(ArraySet as){ + return new XBitSet(as.toArray()); + } + + private static ArraySet arraySetOf(XBitSet xbs){ + return new ArraySet(xbs.toArray()); + } + + private void ensureType(){ + int size = (type == Type.ARRAYSET) ? 
+ arrayset.cardinality() : xbitset.cardinality(); + if(size <= TH1){ + toArraySet(); + } + else{ + toXBitSet(); + } + } + + public void and(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + arrayset.and(set.arrayset); + ensureType(); + return; + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + xbitset.and(set.xbitset); + ensureType(); + return; + } + if(type == Type.ARRAYSET){ + toXBitSet(); + } + else{ + set.toXBitSet(); + } + xbitset.and(set.xbitset); + set.ensureType(); + ensureType(); + } + + public void andNot(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + arrayset.andNot(set.arrayset); + ensureType(); + return; + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + xbitset.andNot(set.xbitset); + ensureType(); + return; + } + if(type == Type.ARRAYSET){ + toXBitSet(); + } + else{ + set.toXBitSet(); + } + xbitset.andNot(set.xbitset); + set.ensureType(); + ensureType(); + } + + public int cardinality(){ + if(type == Type.ARRAYSET){ + return arrayset.cardinality(); + } + else{ + return xbitset.cardinality(); + } + } + + public void clear(){ + if(type == Type.ARRAYSET){ + arrayset.clear(); + } + else{ + type = Type.ARRAYSET; + xbitset = null; + arrayset = new ArraySet(); + } + } + + public void clear(int i){ + if(type == Type.ARRAYSET){ + arrayset.clear(i); + } + else{ + xbitset.clear(i); + } + ensureType(); + } + + public void clear(int fromIndex, int toIndex){ + if(type == Type.ARRAYSET){ + arrayset.clear(fromIndex, toIndex); + } + else{ + xbitset.clear(fromIndex, toIndex); + } + ensureType(); + } + + @Override + public VertexSet clone(){ + try{ + VertexSet result = (VertexSet)super.clone(); + if(type == Type.ARRAYSET){ + result.arrayset = (ArraySet)arrayset.clone(); + } + else{ + result.xbitset = (XBitSet)xbitset.clone(); + } + return result; + } + catch(CloneNotSupportedException e){ + throw new AssertionError(); + } + } + + @Override + public boolean equals(Object obj){ + if(!(obj 
instanceof VertexSet)){ + return false; + } + VertexSet vs = (VertexSet)obj; + if(type == Type.ARRAYSET && vs.type == Type.ARRAYSET){ + return arrayset.equals(vs.arrayset); + } + if(type == Type.XBITSET && vs.type == Type.XBITSET){ + return xbitset.equals(vs.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).equals(vs.xbitset); + } + else{ + return xbitset.equals(xBitSetOf(vs.arrayset)); + } + } + + public void flip(int i){ + if(type == Type.ARRAYSET){ + arrayset.flip(i); + } + else{ + xbitset.flip(i); + } + ensureType(); + } + + public void flip(int fromIndex, int toIndex){ + if(type == Type.ARRAYSET){ + arrayset.flip(fromIndex, toIndex); + } + else{ + xbitset.flip(fromIndex, toIndex); + } + ensureType(); + } + + public boolean get(int i){ + if(type == Type.ARRAYSET){ + return arrayset.get(i); + } + else{ + return xbitset.get(i); + } + } + + public VertexSet get(int fromIndex, int toIndex){ + throw new UnsupportedOperationException(); + } + + @Override + public int hashCode(){ + int hash = 1; + if(type == Type.ARRAYSET){ + for(int i = 0; i < arrayset.size; i++){ + hash = 31 * hash + arrayset.a[i]; + } + } + else{ + for(int i = xbitset.nextSetBit(0); + i >= 0; i = xbitset.nextSetBit(i + 1)){ + hash = 31 * hash + i; + } + } + return hash; + } + + public boolean hasSmaller(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return arrayset.hasSmaller(set.arrayset); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return xbitset.hasSmaller(set.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).hasSmaller(set.xbitset); + } + else{ + return xbitset.hasSmaller(xBitSetOf(set.arrayset)); + } + } + + public boolean hasSmallerVertexThan(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return arrayset.hasSmallerVertexThan(set.arrayset); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return xbitset.hasSmallerVertexThan(set.xbitset); + } + if(type == 
Type.ARRAYSET){ + return xBitSetOf(arrayset).hasSmallerVertexThan(set.xbitset); + } + else{ + return xbitset.hasSmallerVertexThan(xBitSetOf(set.arrayset)); + } + } + + public boolean intersects(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return arrayset.intersects(set.arrayset); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return xbitset.intersects(set.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).intersects(set.xbitset); + } + else{ + return xbitset.intersects(xBitSetOf(set.arrayset)); + } + } + + public VertexSet intersectWith(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return new VertexSet(arrayset.intersectWith(set.arrayset)); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return new VertexSet(xbitset.intersectWith(set.xbitset)); + } + if(type == Type.ARRAYSET){ + return new VertexSet(xBitSetOf(arrayset).intersectWith(set.xbitset)); + } + else{ + return new VertexSet(xbitset.intersectWith(xBitSetOf(set.arrayset))); + } + } + + public boolean isSubset(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return arrayset.isSubset(set.arrayset); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return xbitset.isSubset(set.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).isSubset(set.xbitset); + } + else{ + return xbitset.isSubset(xBitSetOf(set.arrayset)); + } + } + + public boolean isDisjoint(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return arrayset.isDisjoint(set.arrayset); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return xbitset.isDisjoint(set.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).isDisjoint(set.xbitset); + } + else{ + return xbitset.isDisjoint(xBitSetOf(set.arrayset)); + } + } + + public boolean isEmpty(){ + if(type == Type.ARRAYSET){ + return arrayset.isEmpty(); + } + else{ + return 
xbitset.isEmpty(); + } + } + + public boolean isSuperset(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return arrayset.isSuperset(set.arrayset); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return xbitset.isSuperset(set.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).isSuperset(set.xbitset); + } + else{ + return xbitset.isSuperset(xBitSetOf(set.arrayset)); + } + } + + public int length(){ + if(type == Type.ARRAYSET){ + return arrayset.length(); + } + else{ + return xbitset.length(); + } + } + + public int nextClearBit(int fromIndex){ + if(type == Type.ARRAYSET){ + return arrayset.nextClearBit(fromIndex); + } + else{ + return xbitset.nextClearBit(fromIndex); + } + } + + public int nextSetBit(int fromIndex){ + if(type == Type.ARRAYSET){ + return arrayset.nextSetBit(fromIndex); + } + else{ + return xbitset.nextSetBit(fromIndex); + } + } + + public void or(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + arrayset.or(set.arrayset); + ensureType(); + return; + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + xbitset.or(set.xbitset); + ensureType(); + return; + } + if(type == Type.ARRAYSET){ + toXBitSet(); + } + else{ + set.toXBitSet(); + } + xbitset.or(set.xbitset); + set.ensureType(); + ensureType(); + } + + public int previousClearBit(int fromIndex){ + throw new UnsupportedOperationException(); + } + + public int previousSetBit(int fromIndex){ + throw new UnsupportedOperationException(); + } + + public void set(int i){ + if(type == Type.ARRAYSET){ + arrayset.set(i); + } + else{ + xbitset.set(i); + } + ensureType(); + } + + public void set(int i, boolean value){ + if(type == Type.ARRAYSET){ + arrayset.set(i, value); + } + else{ + xbitset.set(i, value); + } + ensureType(); + } + + public void set(int fromIndex, int toIndex){ + if(type == Type.ARRAYSET){ + arrayset.set(fromIndex, toIndex); + } + else{ + xbitset.set(fromIndex, toIndex); + } + ensureType(); + } + + 
public void set(int fromIndex, int toIndex, boolean value){ + if(type == Type.ARRAYSET){ + arrayset.set(fromIndex, toIndex, value); + } + else{ + xbitset.set(fromIndex, toIndex, value); + } + ensureType(); + } + + public int size(){ + throw new UnsupportedOperationException(); + } + + public VertexSet subtract(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return new VertexSet(arrayset.subtract(set.arrayset)); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return new VertexSet(xbitset.subtract(set.xbitset)); + } + if(type == Type.ARRAYSET){ + return new VertexSet(xBitSetOf(arrayset).subtract(set.xbitset)); + } + else{ + return new VertexSet(xbitset.subtract(xBitSetOf(set.arrayset))); + } + } + + public int[] toArray(){ + if(type == Type.ARRAYSET){ + return arrayset.toArray(); + } + else{ + return xbitset.toArray(); + } + } + + public byte[] toByteArray(){ + if(type == Type.ARRAYSET){ + return arrayset.toByteArray(); + } + else{ + return xbitset.toByteArray(); + } + } + + public long[] toLongArray(){ + if(type == Type.ARRAYSET){ + return arrayset.toLongArray(); + } + else{ + return xbitset.toLongArray(); + } + } + + @Override + public String toString(){ + if(type == Type.ARRAYSET){ + return arrayset.toString(); + } + else{ + return xbitset.toString(); + } + } + + public VertexSet unionWith(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + return new VertexSet(arrayset.unionWith(set.arrayset)); + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + return new VertexSet(xbitset.unionWith(set.xbitset)); + } + if(type == Type.ARRAYSET){ + return new VertexSet(xBitSetOf(arrayset).unionWith(set.xbitset)); + } + else{ + return new VertexSet(xbitset.unionWith(xBitSetOf(set.arrayset))); + } + } + + public static VertexSet valueOf(byte[] bytes){ + throw new UnsupportedOperationException(); + } + + public static VertexSet valueOf(long[] longs){ + throw new UnsupportedOperationException(); + } + + 
public void xor(VertexSet set){ + if(type == Type.ARRAYSET && set.type == Type.ARRAYSET){ + arrayset.xor(set.arrayset); + ensureType(); + return; + } + if(type == Type.XBITSET && set.type == Type.XBITSET){ + xbitset.xor(set.xbitset); + ensureType(); + return; + } + if(type == Type.ARRAYSET){ + toXBitSet(); + } + else{ + set.toXBitSet(); + } + xbitset.xor(set.xbitset); + set.ensureType(); + ensureType(); + } + + @Override + public int compareTo(VertexSet vs){ + if(type == Type.ARRAYSET && vs.type == Type.ARRAYSET){ + return arrayset.compareTo(vs.arrayset); + } + if(type == Type.XBITSET && vs.type == Type.XBITSET){ + return xbitset.compareTo(vs.xbitset); + } + if(type == Type.ARRAYSET){ + return xBitSetOf(arrayset).compareTo(vs.xbitset); + } + else{ + return xbitset.compareTo(xBitSetOf(vs.arrayset)); + } + } + + public void checkTypeValidity(){ + if(type == Type.ARRAYSET){ + assert(arrayset != null); + assert(xbitset == null); + assert(arrayset.cardinality() <= TH1); + } + else{ + assert(xbitset != null); + assert(arrayset == null); + assert(xbitset.cardinality() > TH1); + } + } +} diff --git a/solvers/TCS-Meiji/tw/heuristic/XBitSet.java b/solvers/TCS-Meiji/tw/heuristic/XBitSet.java new file mode 100644 index 0000000..c6cb04b --- /dev/null +++ b/solvers/TCS-Meiji/tw/heuristic/XBitSet.java @@ -0,0 +1,308 @@ +/* + * Copyright (c) 2016, Hisao Tamaki + */ +package tw.heuristic; + +import java.util.BitSet; +import java.util.Comparator; + +/** + * This class extends {@code java.util.BitSet} which implements + * a variable length bit vector. + * The main purpose is to provide methods that create + * a new vector as a result of a set operation such as + * union and intersection, rather than modifying the + * existing one. See API documentation for {@code java.util.BitSet}. + * + * @author Hisao Tamaki + */ + +public class XBitSet extends BitSet +implements Comparable{ + + /** + * Creates an empty {@code XBitSet}. 
+ */ + public XBitSet() { + super(); + } + + /** + * Creates an empty {@code XBitSet} whose initial size is large enough to explicitly + * contain members smaller than {@code n}. + * + * @param n the initial size of the {@code XBitSet} + * @throws NegativeArraySizeException if the specified initial size + * is negative + */ + public XBitSet(int n) { + super(n); + } + + /** + * Creates an {@code XBitSet} with members provided by an array + * + * @param a an array of members to be in the {@code XBitSet} + */ + public XBitSet(int a[]) { + super(); + for (int i = 0; i < a.length; i++) { + set(a[i]); + } + } + + /** + * Creates an {@code XBitSet} with members provided by an array. + * The initial size is large enough to explicitly + * contain members smaller than {@code n}. + * + * @param n the initial size of the {@code XBitSet} + * @param a an array of indices where the bits should be set + * @throws NegativeArraySizeException if the specified initial size + * is negative + */ + public XBitSet(int n, int a[]) { + super(n); + for (int i = 0; i < a.length; i++) { + set(a[i]); + } + } + + /** + * Returns {@code true} if this target {@code XBitSet} is a subset + * of the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} is a subset + * of the argument {@code XBitSet} + */ + public boolean isSubset(XBitSet set) { + BitSet tmp = (BitSet) this.clone(); + tmp.andNot(set); + return tmp.isEmpty(); + } + + /** + * Returns {@code true} if this target {@code XBitSet} is disjoint + * from the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} is + * disjoint from the argument {@code XBitSet} + */ + public boolean isDisjoint(XBitSet set) { + BitSet tmp = (BitSet) this.clone(); + tmp.and(set); + return tmp.isEmpty(); + } + + /** + * Returns {@code true} if this target {@code XBitSet} has a + * non-empty intersection with the argument {@code 
XBitSet} + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} + * intersects with the argument {@code XBitSet} + */ + + public boolean intersects(XBitSet set) { + return super.intersects(set); + } + + /** + * Returns {@code true} if this target {@code XBitSet} is a superset + * of the argument bit set + * + * @param set an {@code XBitSet} + * @return boolean indicating whether this {@code XBitSet} is a superset + * of the argument {@code XBitSet} + */ + public boolean isSuperset(XBitSet set) { + BitSet tmp = (BitSet) set.clone(); + tmp.andNot(this); + return tmp.isEmpty(); + } + + /** + * Returns an {@code XBitSet} that is the union of this + * target {@code XBitSet} and the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return the union {@code XBitSet} + */ + public XBitSet unionWith(XBitSet set) { + XBitSet result = (XBitSet) this.clone(); + result.or(set); + return result; + } + + /** + * Returns an {@code XBitSet} that is the intersection of this + * target {@code XBitSet} and the argument {@code XBitSet} + * + * @param set an {@code XBitSet} + * @return the intersection {@code XBitSet} + */ + public XBitSet intersectWith(XBitSet set) { + XBitSet result = (XBitSet) this.clone(); + result.and(set); + return result; + } + + /** + * Returns an {@code XBitSet} that is the result + * of removing the members of the argument {@code XBitSet} + * from the target {@code XBitSet}. + * @param set an {@code XBitSet} + * @return the difference {@code XBitSet} + */ + public XBitSet subtract(XBitSet set) { + XBitSet result = (XBitSet) this.clone(); + result.andNot(set); + return result; + } + + /** + * Returns {@code true} if the target {@code XBitSet} has a member + * that is smaller than the smallest member of the argument {@code XBitSet}. + * Both the target and the argument {@code XBitSet} must be non-empty + * to ensure a meaningful result.
+ * @param set an {@code XBitSet} + * @return {@code true} if the target {@code XBitSet} has a member + * smaller than the smallest member of the argument {@code XBitSet}; + * {@code false} otherwise + */ + public boolean hasSmaller(XBitSet set) { + assert !isEmpty() && !set.isEmpty(); + return this.nextSetBit(0) < set.nextSetBit(0); + } + + @Override + /** + * Compare the target {@code XBitSet} with the argument + * {@code XBitSet}, where the bit vectors are viewed as + * binary representation of an integer, the bit {@code i} + * set meaning that the number contains {@code 2^i}. + * @return negative value if the target is smaller, positive if it is + * larger, and zero if it equals the argument + */ + public int compareTo(XBitSet set) { + int l1 = this.length(); + int l2 = set.length(); + if (l1 != l2) { + return l1 - l2; + } + for (int i = l1 - 1; i >= 0; i--) { + if (this.get(i) && !set.get(i)) return 1; + else if (!this.get(i) && set.get(i)) return -1; + } + return 0; + } + + /** + * Converts the target {@code XBitSet} into an array + * that contains all the members in the set + * @return the array representation of the set + */ + public int[] toArray() { + int[] result = new int[cardinality()]; + int k = 0; + for (int i = nextSetBit(0); i >=0; i= nextSetBit(i + 1)) { + result[k++] = i; + } + return result; + } + + /** + * Checks if this target bit set has an element + * that is smaller than every element in + * the argument bit set + * @param vs bit set + * @return {@code true} if this bit set has an element + * smaller than every element in {@code vs} + */ + public boolean hasSmallerVertexThan(XBitSet vs) { + if (this.isEmpty()) return false; + else if (vs.isEmpty()) return true; + else return nextSetBit(0) < vs.nextSetBit(0); + } + + /** + * holds the reference to an instance of the {@code DescendingComparator} + * for {@code BitSet} + */ + public static final Comparator descendingComparator = + new DescendingComparator(); + + /** + * holds the reference to 
an instance of the {@code AscendingComparator} + * for {@code BitSet} + */ + public static final Comparator ascendingComparator = + new AscendingComparator(); + + /** + * holds the reference to an instance of the {@code CardinalityComparator} + * for {@code BitSet} + */ + public static final Comparator cardinalityComparator = + new CardinalityComparator(); + + /** + * A comparator for {@code BitSet}. The {@code compare} + * method compares the two vectors in the lexicographic order + * where the highest bit is the most significant. + */ + public static class DescendingComparator implements Comparator { + @Override + public int compare(BitSet s1, BitSet s2) { + int l1 = s1.length(); + int l2 = s2.length(); + if (l1 != l2) { + return l1 - l2; + } + for (int i = l1 - 1; i >= 0; i--) { + if (s1.get(i) && !s2.get(i)) return 1; + else if (!s1.get(i) && s2.get(i)) return -1; + } + return 0; + } + } + + /** + * A comparator for {@code BitSet}. The {@code compare} method compares + * the two vectors in the lexicographic order where the + * lowest bit is the most significant. + */ + public static class AscendingComparator implements Comparator { + @Override + public int compare(BitSet s1, BitSet s2) { + int l1 = s1.length(); + int l2 = s2.length(); + + for (int i = 0; i < Math.min(l1, l2); i++) { + if (s1.get(i) && !s2.get(i)) return 1; + else if (!s1.get(i) && s2.get(i)) return -1; + } + return l1 - l2; + } + } + + /** + * A comparator for {@code BitSet}. The {@code compare} method compares + * the two sets in terms of the cardinality. 
In case of + * a tie, the two sets are compared by the {@code AscendingComparator} + */ + public static class CardinalityComparator implements Comparator { + @Override + public int compare(BitSet s1, BitSet s2) { + int c1 = s1.cardinality(); + int c2 = s2.cardinality(); + if (c1 != c2) { + return c1 - c2; + } + else + return ascendingComparator.compare(s1, s2); + } + } +} diff --git a/solvers/TCS-Meiji/validate b/solvers/TCS-Meiji/validate new file mode 100644 index 0000000..ed2a23b --- /dev/null +++ b/solvers/TCS-Meiji/validate @@ -0,0 +1,9 @@ +#!/bin/sh +# +for infile in `ls test_instance` +do + file=${infile%.*} + outfile="$file.td" + echo $file + td-validate-master/td-validate "test_instance/$infile" "test_result/$outfile" +done \ No newline at end of file diff --git a/solvers/flow-cutter-pace17/LICENSE b/solvers/flow-cutter-pace17/LICENSE new file mode 100644 index 0000000..a8923c7 --- /dev/null +++ b/solvers/flow-cutter-pace17/LICENSE @@ -0,0 +1,22 @@ +Copyright (c) 2016, Ben Strasser +All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, +are permitted provided that the following conditions are met: + +Redistributions of source code must retain the above copyright notice, this list +of conditions and the following disclaimer. +Redistributions in binary form must reproduce the above copyright notice, this +list of conditions and the following disclaimer in the documentation and/or +other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/solvers/flow-cutter-pace17/Makefile b/solvers/flow-cutter-pace17/Makefile new file mode 100644 index 0000000..bd7a9cc --- /dev/null +++ b/solvers/flow-cutter-pace17/Makefile @@ -0,0 +1,11 @@ + +all: flow_cutter_pace17 + +flow_cutter_pace17: src/* + g++ -Wall -std=c++11 -O3 -DNDEBUG src/*.cpp -o flow_cutter_pace17 + +.PHONY : clean + +clean: + rm flow_cutter_pace17 + diff --git a/solvers/flow-cutter-pace17/README.md b/solvers/flow-cutter-pace17/README.md new file mode 100644 index 0000000..d889bec --- /dev/null +++ b/solvers/flow-cutter-pace17/README.md @@ -0,0 +1,45 @@ +# FlowCutter PACE 2017 Submission + +This repository contains the FlowCutter code submitted to the [PACE 2017](https://pacechallenge.wordpress.com/2016/12/01/announcing-pace-2017/) tree decomposition challenge. +FlowCutter was developed at [KIT](https://www.kit.edu) in the [group of Prof. Dorothea Wagner](https://i11www.iti.kit.edu/). + +If you are running a Unix-like system, then getting started is very simple. Just clone the repository and build the programs, as follows: + +```bash +git clone https://github.com/kit-algo/flow-cutter-pace17.git +cd flow-cutter-pace17 +./build.sh +``` + +There are no dependencies beyond a GCC with version 4.8 or newer. Clang should also work but has not been tested by us. Building the code under Windows probably requires a few code modifications in `pace.cpp`. 
+ +After executing the build script, the root directory of the repository should contain the two binary files `flow_cutter_pace17` and `flow_cutter_parallel_pace17`. These are the programs entered into the heuristic sequential and heuristic parallel tracks of the competition. The output decompositions are guaranteed to be valid but do not necessarily have a minimum width. Both executables have the same interface. + +There are three ways to correctly invoke the program: + +```bash +./flow_cutter_pace17 < my_graph.gr +./flow_cutter_pace17 my_graph.gr +./flow_cutter_pace17 -s 42 < my_graph.gr +``` + +The first and the last commands read the input graph from the standard input. The second command reads it from a file whose name is given as a parameter. The `-s` parameter sets the random seed. By default a seed of 0 is assumed. We tried to make sure that, given the same seed, the behaviour of the sequential binary is identical even across compilers. + +The executables run until either a SIGINT or a SIGTERM signal is sent. Once this signal is received, the program prints a tree decomposition with the smallest width that it could find to the standard output and terminates. Note that no decomposition is output if you send the signal before any decomposition has been found. + +The format specifications of the input graph and output decompositions follow those of the [PACE 2017](https://pacechallenge.wordpress.com/2016/12/01/announcing-pace-2017/) challenge. + +**Warning:** The FlowCutter PACE 2017 code only optimizes the tree width. If you want to optimize different criteria such as the fill-in, the [FlowCutter PACE 2016 code](https://github.com/ben-strasser/flow-cutter-pace16) will probably give better results. + +## Publications + +Please cite the following article if you use our code in a publication: + +* Graph Bisection with Pareto-Optimization. + Michael Hamann and Ben Strasser.
+ Proceedings of the 18th Meeting on Algorithm Engineering and Experiments (ALENEX'16). + +## Contact + +Please send an email to Ben Strasser (strasser (at) kit (dot) edu). + diff --git a/solvers/flow-cutter-pace17/build.sh b/solvers/flow-cutter-pace17/build.sh new file mode 100644 index 0000000..488c53d --- /dev/null +++ b/solvers/flow-cutter-pace17/build.sh @@ -0,0 +1,3 @@ +#!/bin/sh +g++ -Wall -std=c++11 -O3 -DNDEBUG src/*.cpp -o flow_cutter_pace17 + diff --git a/solvers/flow-cutter-pace17/src/array_id_func.h b/solvers/flow-cutter-pace17/src/array_id_func.h new file mode 100644 index 0000000..ee0954e --- /dev/null +++ b/solvers/flow-cutter-pace17/src/array_id_func.h @@ -0,0 +1,188 @@ +#ifndef ARRAY_ID_FUNC_H +#define ARRAY_ID_FUNC_H + +#include "id_func.h" +#include +#include +#include + +template +class ArrayIDFunc{ +public: + ArrayIDFunc()noexcept:preimage_count_(0), data_(nullptr){} + + explicit ArrayIDFunc(int preimage_count) + :preimage_count_(preimage_count){ + assert(preimage_count >= 0 && "ids may not be negative"); + if(preimage_count == 0) + data_ = nullptr; + else + data_ = new T[preimage_count_]; + } + + template + ArrayIDFunc(const IDFunc&o) + :preimage_count_(o.preimage_count()){ + if(preimage_count_ == 0) + data_ = nullptr; + else{ + data_ = new T[preimage_count_]; + try{ + for(int id=0; id + typename std::enable_if< + is_id_func::value && + std::is_convertible::type, T>::value, + ArrayIDFunc&>::type operator=(const IDFunc&o){ + ArrayIDFunc(o).swap(*this); + return *this; + } + + ArrayIDFunc&operator=(const ArrayIDFunc&o){ + ArrayIDFunc(o).swap(*this); + return *this; + } + + ArrayIDFunc&operator=(ArrayIDFunc&&o)noexcept{ + this->~ArrayIDFunc(); + data_ = nullptr; + preimage_count_ = 0; + swap(o); + return *this; + } + + // IDFunc + int preimage_count() const{return preimage_count_;} + + const T&operator()(int id) const{ + assert(0 <= id && id < preimage_count_ && "id out of bounds"); + return data_[id]; + } + + // Mutable IDFunc + void set(int 
id, T t){ + assert(0 <= id && id < preimage_count_ && "id out of bounds"); + data_[id] = std::move(t); + } + + T move(int id){ + assert(0 <= id && id < preimage_count_ && "id out of bounds"); + return std::move(data_[id]); + } + + void fill(const T&t){ + std::fill(data_, data_+preimage_count_, t); + } + + // Array only functionality + T&operator[](int id){ + assert(0 <= id && id < preimage_count_ && "id out of bounds"); + return data_[id]; + } + + const T&operator[](int id) const{ + assert(0 <= id && id < preimage_count_ && "id out of bounds"); + return data_[id]; + } + + T*begin(){ return data_; } + const T*begin() const{ return data_; } + T*end(){ return data_ + preimage_count_; } + const T*end()const{ return data_ + preimage_count_; } + + int preimage_count_; + T*data_; +}; + +struct ArrayIDIDFunc : public ArrayIDFunc{ + + ArrayIDIDFunc()noexcept :image_count_(0){} + + ArrayIDIDFunc(int preimage_count, int image_count) + :ArrayIDFunc(preimage_count), image_count_(image_count){} + + ArrayIDIDFunc(const ArrayIDIDFunc&o) = default; + ArrayIDIDFunc(ArrayIDIDFunc&&) = default; + ArrayIDIDFunc&operator=(const ArrayIDIDFunc&) = default; + ArrayIDIDFunc&operator=(ArrayIDIDFunc&&) = default; + + void swap(ArrayIDIDFunc&o){ + std::swap(image_count_, o.image_count_); + ArrayIDFunc::swap(static_cast&>(o)); + } + + template + ArrayIDIDFunc(const IDFunc&f, int image_count_) + : ArrayIDFunc(f), image_count_(image_count_){} + + + template + ArrayIDIDFunc(const IDIDFunc&f/*, + typename std::enable_if::value, void>::type*dummy=0*/) + : ArrayIDFunc(f), image_count_(f.image_count()){} + + template + typename std::enable_if< + is_id_id_func::value, + ArrayIDIDFunc& + >::type operator=(const IDIDFunc&o){ + ArrayIDIDFunc(o).swap(*this); + return *this; + } + + int image_count()const { return image_count_; } + + int operator()(int x) const{ + assert(0 <= x && x < preimage_count_ && "preimage id out of bounds"); + int y = data_[x]; + assert(0 <= y && y < image_count_ && "image id out of 
bounds"); + return y; + } + + void set_image_count(int new_image_count){ + image_count_ = new_image_count; + } + + int image_count_; +}; + +#endif + diff --git a/solvers/flow-cutter-pace17/src/back_arc.h b/solvers/flow-cutter-pace17/src/back_arc.h new file mode 100644 index 0000000..6990036 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/back_arc.h @@ -0,0 +1,51 @@ +#ifndef BACK_ARC_H +#define BACK_ARC_H + +#include "array_id_func.h" +#include "id_sort.h" + +// Input graph must be symmetric +template +ArrayIDIDFunc compute_back_arc_permutation(const Tail&tail, const Head&head){ + + const int arc_count = head.preimage_count(); + const int node_count = head.image_count(); + + struct D{ + int tail, head, arc_id; + }; + + ArrayIDFuncarc_list(arc_count), tmp(arc_count); + for(int i=0; i arc_list[i].head) + std::swap(arc_list[i].tail, arc_list[i].head); + arc_list[i].arc_id = i; + } + + stable_sort_copy_by_id( + std::begin(arc_list), std::end(arc_list), + std::begin(tmp), + node_count, + [](D d){return d.head;} + ); + stable_sort_copy_by_id( + std::begin(tmp), std::end(tmp), + std::begin(arc_list), + node_count, + [](D d){return d.tail;} + ); + + ArrayIDIDFunc back_arc(head.preimage_count(), head.preimage_count()); + + for(int i=0; i + +#ifndef NDEBUG +#include "union_find.h" +#endif + + +#include "list_graph.h" + +namespace cch_order{ + + inline + bool is_valid_partial_order(const ArrayIDIDFunc&partial_order){ + if(partial_order.preimage_count() == 0) + return true; + else + return max_over_id_func(compute_histogram(partial_order)) <= 1; + } + + // Computes an optimal order for a graph consisting of only a path + template + ArrayIDIDFunc compute_path_graph_order(int node_count, const InputNodeID&input_node_id){ + ArrayIDIDFunc order(node_count, input_node_id.image_count()); + int pos = 0; + for(int i=1; i<=node_count; i*=2){ + for(int j=i-1; j + ArrayIDIDFunc compute_tree_graph_order(const Tail&tail, const Head&head, const InputNodeID&input_node_id){ + const int 
node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + assert(is_connected(tail, head)); + assert(2*(node_count-1) == arc_count); + + auto out_arc = invert_id_id_func(tail); + + ArrayIDFuncdeg = id_func(node_count, [](int){return 0;}); + for(int xy=0; xy + ArrayIDIDFunc compute_tree_graph_order(const Tail&tail, const Head&head){ + return compute_tree_graph_order( + tail, head, + id_id_func( + tail.image_count(), tail.image_count(), + [](int x){return x;} + ) + ); + } + + // Computes an optimal order for a trivial graph. If the input graph is not trivial, then the task is forwarded to the compute_non_trivial_graph_order functor parameter. + // A graph is trivial if it is a clique or a tree. + // + // Precondition: the graph is connected + template + ArrayIDIDFunc compute_trivial_graph_order_if_graph_is_trivial( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeNonTrivialGraphOrder&compute_non_trivial_graph_order + ){ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + assert(is_connected(tail, head)); + + bool + is_clique = (static_cast(node_count)*static_cast(node_count-1) == static_cast(arc_count)), + has_no_arcs = (arc_count == 0), + is_tree = (arc_count == 2*(node_count-1)); + + ArrayIDIDFunc order; + + + if(is_clique || has_no_arcs){ + order = id_id_func(node_count, input_node_id.image_count(), [&](int x){return input_node_id(x);}); + }else if(is_tree){ + order = compute_tree_graph_order(std::move(tail), std::move(head), std::move(input_node_id)); + }else { + order = compute_non_trivial_graph_order(std::move(tail), std::move(head), std::move(input_node_id)); + } + + assert(is_valid_partial_order(order)); + return order; // NVRO + } + + // This function internally reorders the nodes in preorder, then recurses on each component of the graph. 
+ // should_place_node_at_the_end_of_the_order is called with the id of some node of the component and the function should decide + // whether this component is placed at the end of the order or at the front. + // If the relative component order does not matter, then let should_place_node_at_the_end_of_the_order always return false. + // + // compute_connected_graph_order should order the nodes in each component. The order should map node IDs in the graph that is given to input node IDs. + template + ArrayIDIDFunc reorder_nodes_in_preorder_and_compute_unconnected_graph_order_if_component_is_non_trivial( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeConnectedGraphOrder&compute_connected_graph_order, + const ShouldPlaceNodeAtTheEndOfTheOrder&should_place_node_at_the_end_of_the_order + ){ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + // We first reorder the graph nodes in preorder + + auto preorder = compute_preorder(compute_successor_function(tail, head)); + + { + auto inv_preorder = inverse_permutation(preorder); + tail = chain(std::move(tail), inv_preorder); + head = chain(std::move(head), inv_preorder); + input_node_id = chain(preorder, std::move(input_node_id)); + } + + // We then sort the arcs accordingly + + { + auto p = sort_arcs_first_by_tail_second_by_head(tail, head); + tail = chain(p, std::move(tail)); + head = chain(p, std::move(head)); + } + + assert(is_symmetric(tail, head)); + + ArrayIDIDFunc order(node_count, input_node_id.image_count()); + int order_begin = 0; + int order_end = node_count; + + // By reordering the nodes in preorder, we can guarentee, that the nodes of every component are from a coninous range. + // As we sorted the arcs this is also true for the arcs. + + + // The following function is called for every component. The components are identified afterwards. 
+ auto on_new_component = [&](int node_begin, int node_end, int arc_begin, int arc_end){ + auto sub_node_count = node_end - node_begin; + auto sub_arc_count = arc_end - arc_begin; + + auto sub_tail = id_id_func( + sub_arc_count, sub_node_count, + [&](int x){ + return tail(arc_begin + x) - node_begin; + } + ); + auto sub_head = id_id_func( + sub_arc_count, sub_node_count, + [&](int x){ + return head(arc_begin + x) - node_begin; + } + ); + auto sub_input_node_id = id_id_func( + sub_node_count, input_node_id.image_count(), + [&](int x){ + return input_node_id(node_begin + x); + } + ); + + + assert(is_symmetric(sub_tail, sub_head)); + assert(!has_multi_arcs(sub_tail, sub_head)); + assert(is_loop_free(sub_tail, sub_head)); + + + auto sub_order = compute_trivial_graph_order_if_graph_is_trivial(sub_tail, sub_head, sub_input_node_id, compute_connected_graph_order); + + #ifndef NDEBUG + if(node_begin != node_end) + { + bool r = should_place_node_at_the_end_of_the_order(preorder(node_begin)); + for(int x=node_begin; x + ArrayIDIDFunc compute_nested_dissection_graph_order( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeSeparator&compute_separator, + const ComputePartOrder&compute_graph_part_order + ){ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + auto separator = compute_separator(tail, head, input_node_id); + assert(separator.size() > 0); + + BitIDFunc in_separator(node_count); + in_separator.fill(false); + for(auto x:separator) + in_separator.set(x, true); + + BitIDFunc keep_arc_flag = id_func( + arc_count, + [&](int a){ + return in_separator(tail(a)) == in_separator(head(a)); + } + ); + + if((int)separator.size() == node_count){ + keep_arc_flag.fill(false); + } + + + int new_arc_count = count_true(keep_arc_flag); + tail = keep_if(keep_arc_flag, new_arc_count, std::move(tail)); + head = keep_if(keep_arc_flag, new_arc_count, std::move(head)); + + assert(is_symmetric(tail, head)); + + 
return reorder_nodes_in_preorder_and_compute_unconnected_graph_order_if_component_is_non_trivial( + std::move(tail), std::move(head), + std::move(input_node_id), + compute_graph_part_order, std::move(in_separator) + ); + } + + template + ArrayIDIDFunc compute_nested_dissection_graph_order( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeSeparator&compute_separator + ){ + auto compute_graph_part_order = [&]( + ArrayIDIDFunc a_tail, ArrayIDIDFunc a_head, + ArrayIDIDFunc a_input_node_id + ){ + return compute_nested_dissection_graph_order( + std::move(a_tail), std::move(a_head), + std::move(a_input_node_id), + compute_separator + ); + }; + return compute_nested_dissection_graph_order(tail, head, input_node_id, compute_separator, compute_graph_part_order); + } + + template + ArrayIDIDFunc compute_graph_order_with_large_degree_three_independent_set_at_the_begin( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeCoreGraphOrder&compute_core_graph_order + ){ + const int node_count = tail.image_count(); + int arc_count = tail.preimage_count(); + + BitIDFunc in_independent_set(node_count); + in_independent_set.fill(false); + + auto inv_tail = invert_sorted_id_id_func(tail); + auto degree = id_func( + node_count, + [&](int x){ + return inv_tail(x).end() - inv_tail(x).begin(); + } + ); + + auto back_arc = compute_back_arc_permutation(tail, head); + + for(auto x=0; x + ArrayIDIDFunc compute_graph_order_with_degree_two_chain_at_the_begin( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeCoreGraphOrder&compute_core_graph_order + ){ + const int node_count = tail.image_count(); + int arc_count = tail.preimage_count(); + + assert(tail.preimage_count() == arc_count); + assert(head.preimage_count() == arc_count); + assert(input_node_id.preimage_count() == node_count); + assert(tail.image_count() == node_count); + assert(head.image_count() == node_count); + + + //auto 
degree = compute_histogram(tail); + + assert(is_symmetric(tail, head)); + assert(!has_multi_arcs(tail, head)); + assert(is_loop_free(tail, head)); + + + BitIDFunc keep_flag(arc_count); + keep_flag.fill(true); + + auto inv_tail = invert_sorted_id_id_func(tail); + auto degree = id_func( + node_count, + [&](int x){ + return inv_tail(x).end() - inv_tail(x).begin(); + } + ); + + BitIDFunc node_in_core = id_func( + node_count, + [&](int x){ + return degree(x) > 2; + } + ); + + for(auto first_arc=0; first_arc<arc_count; ++first_arc){ + auto chain_begin = tail(first_arc); + auto chain_now = head(first_arc); + if(degree(chain_begin) > 2 && degree(chain_now) <= 2){ + auto chain_prev = chain_begin; + + int arc_prev_to_now = first_arc; + + while(degree(chain_now) == 2){ + for(auto arc_now_to_next : inv_tail(chain_now)){ + auto chain_next = head(arc_now_to_next); + if(chain_next != chain_prev){ + + chain_prev = chain_now; + chain_now = chain_next; + arc_prev_to_now = arc_now_to_next; + break; + } + } + } + + assert(arc_prev_to_now != -1); + + auto chain_end = chain_now; + auto last_arc = arc_prev_to_now; + + assert(degree(chain_end) != 0); + + if(degree(chain_end) == 1){ + // Dead end, no shortcut needed + keep_flag.set(first_arc, false); + for(auto back_arc_for_first_arc:inv_tail(head(first_arc))){ + if(head(back_arc_for_first_arc) == tail(first_arc)){ + keep_flag.set(back_arc_for_first_arc, false); + break; + } + } + }else{ + if(chain_begin == chain_end){ + // The chain is a loop, no shortcut needed + keep_flag.set(first_arc, false); + keep_flag.set(last_arc, false); + }else{ + // A normal chain, shortcut needed + head[first_arc] = chain_end; + keep_flag.set(last_arc, false); + } + } + } + } + + // Remove arcs between chains and the rest of the graph + { + arc_count = count_true(keep_flag); + tail = keep_if(keep_flag, arc_count, std::move(tail)); + head = keep_if(keep_flag, arc_count, std::move(head)); + } + + // Remove multi arcs + { + keep_flag = identify_non_multi_arcs(tail, head); + arc_count = count_true(keep_flag); + tail = keep_if(keep_flag, arc_count, std::move(tail)); + head = keep_if(keep_flag,
arc_count, std::move(head)); + } + + #ifndef NDEBUG + + assert(is_symmetric(tail, head)); + assert(!has_multi_arcs(tail, head)); + assert(is_loop_free(tail, head)); + + { + auto degree = compute_histogram(tail); + int not_in_core_count = 0; + for(int x=0; x + ArrayIDIDFunc compute_graph_order_with_largest_biconnected_component_at_the_end( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeConnectedGraphOrder&compute_component_graph_order + ){ + int node_count = tail.image_count(); + int arc_count = tail.preimage_count(); + + // Determine the nodes incident to the largest biconnected component. + // Large in terms of many arcs. + + BitIDFunc node_in_largest_biconnected_component(node_count); + + { + auto out_arc = invert_sorted_id_id_func(tail); + auto back_arc = compute_back_arc_permutation(tail, head); + auto arc_component = compute_biconnected_components(out_arc, head, back_arc); + auto largest_component = max_preimage_over_id_func(compute_histogram(arc_component)); + node_in_largest_biconnected_component.fill(false); + for(int i=0; ibool{ + if(i!=0){ + if(tail(i-1) == tail(i) && head(i-1) == head(i)) + return false; + } + if(tail(i) == head(i)) + return false; + return true; + } + ); + int new_arc_count = count_true(keep_flag); + tail = keep_if(keep_flag, new_arc_count, std::move(tail)); + head = keep_if(keep_flag, new_arc_count, std::move(head)); + } + + assert(is_loop_free(tail, head)); + assert(!has_multi_arcs(tail, head)); + assert(is_symmetric(tail, head)); + } + + + template<class ComputeSeparator> + ArrayIDIDFunc compute_nested_dissection_graph_order( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + const ComputeSeparator&compute_separator + ){ + const int node_count = tail.image_count(); + + make_graph_simple(tail, head); + + auto input_node_id = identity_permutation(node_count); + + auto compute_order = [&]( + ArrayIDIDFunc a_tail, ArrayIDIDFunc a_head, + ArrayIDIDFunc a_input_node_id + ){ + return compute_nested_dissection_graph_order(
std::move(a_tail), std::move(a_head), std::move(a_input_node_id), + compute_separator + ); + }; + + auto order = reorder_nodes_in_preorder_and_compute_unconnected_graph_order_if_component_is_non_trivial( + std::move(tail), std::move(head), std::move(input_node_id), + compute_order, [](int){return false;} + ); + + assert(is_permutation(order)); + + return order; // NVRO + } + + template<class ComputeSeparator> + ArrayIDIDFunc compute_cch_graph_order( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + ArrayIDIDFunc input_node_id, + const ComputeSeparator&compute_separator + ){ + make_graph_simple(tail, head); + + auto orderer3 = [&]( + ArrayIDIDFunc a_tail, ArrayIDIDFunc a_head, + ArrayIDIDFunc a_input_node_id + ){ + return compute_nested_dissection_graph_order( + std::move(a_tail), std::move(a_head), std::move(a_input_node_id), + compute_separator + ); + }; + + auto orderer2 = [&]( + ArrayIDIDFunc a_tail, ArrayIDIDFunc a_head, + ArrayIDIDFunc a_input_node_id + ){ + return compute_graph_order_with_degree_two_chain_at_the_begin( + std::move(a_tail), std::move(a_head), std::move(a_input_node_id), + orderer3 + ); + }; + + auto orderer1 = [&]( + ArrayIDIDFunc a_tail, ArrayIDIDFunc a_head, + ArrayIDIDFunc a_input_node_id + ){ + return compute_graph_order_with_largest_biconnected_component_at_the_end( + std::move(a_tail), std::move(a_head), std::move(a_input_node_id), + orderer2 + ); + }; + + auto order = reorder_nodes_in_preorder_and_compute_unconnected_graph_order_if_component_is_non_trivial( + tail, head, input_node_id, + orderer1, [](int){return false;} + ); + + assert(is_permutation(order)); + + return order; // NVRO + } + + template<class ComputeSeparator> + ArrayIDIDFunc compute_cch_graph_order( + ArrayIDIDFunc tail, ArrayIDIDFunc head, + const ComputeSeparator&compute_separator + ){ + return compute_cch_graph_order(std::move(tail), std::move(head), identity_permutation(tail.image_count()), compute_separator); + } +} +#endif diff --git a/solvers/flow-cutter-pace17/src/cell.cpp b/solvers/flow-cutter-pace17/src/cell.cpp
new file mode 100644 index 0000000..7338ce0 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/cell.cpp @@ -0,0 +1,20 @@ +#include "cell.h" + +int get_treewidth_of_multilevel_partition(const std::vector<Cell>&p){ + int tw = 0; + for(auto&x:p) + if(tw < x.bag_size()) + tw = x.bag_size(); + return tw; +} + +int get_node_count_of_multilevel_partition(const std::vector<Cell>&p){ + int node_count = 0; + for(auto&x:p){ + for(auto&y:x.separator_node_list){ + if(node_count <= y) + node_count = y+1; + } + } + return node_count; +} diff --git a/solvers/flow-cutter-pace17/src/cell.h b/solvers/flow-cutter-pace17/src/cell.h new file mode 100644 index 0000000..d48e1fb --- /dev/null +++ b/solvers/flow-cutter-pace17/src/cell.h @@ -0,0 +1,26 @@ +#ifndef CELL_H +#define CELL_H + +#include <vector> + +struct Cell{ + std::vector<int>separator_node_list; + std::vector<int>boundary_node_list; + + int parent_cell; + + int bag_size()const{ + return separator_node_list.size() + boundary_node_list.size(); + } +}; + +inline +bool operator<(const Cell&l, const Cell&r){ + return l.bag_size() < r.bag_size(); +} + +int get_treewidth_of_multilevel_partition(const std::vector<Cell>&p); +int get_node_count_of_multilevel_partition(const std::vector<Cell>&p); + +#endif + diff --git a/solvers/flow-cutter-pace17/src/chain.h b/solvers/flow-cutter-pace17/src/chain.h new file mode 100644 index 0000000..87525f6 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/chain.h @@ -0,0 +1,54 @@ +#ifndef CHAIN_H +#define CHAIN_H + +#include <type_traits> +#include "id_func.h" +#include "array_id_func.h" + +// chain(IDIDFunc, IDFunc) +template<class L, class R> +typename std::enable_if< + is_id_id_func<L>::value + && is_id_func<R>::value + && !is_id_id_func<R>::value, + ArrayIDFunc::type> +>::type +chain(const L&l, const R&r){ + ArrayIDFunc::type>result(l.preimage_count()); + for(int i=0; i<l.preimage_count(); ++i) + result[i] = r(l(i)); + return result; +} + +template<class L, class R> +typename std::enable_if< + is_mutable_id_id_func<L>::value + && is_id_id_func<R>::value, + L +>::type +chain(L l, const R&r){ + assert(l.image_count() == r.preimage_count()); + for(int i=0; i<l.preimage_count(); ++i) + l.set(i, r(l(i))); + l.set_image_count(r.image_count()); + return std::move(l); +} + +template<class L, class R> +typename std::enable_if<
+ is_id_id_func<L>::value + && !is_mutable_id_id_func<L>::value + && is_id_id_func<R>::value, + ArrayIDIDFunc +>::type +chain(const L&l, const R&r){ + assert(l.image_count() == r.preimage_count()); + ArrayIDIDFunc result(l.preimage_count(), r.image_count()); + for(int i=0; i +ArrayIDIDFunc compute_connected_components(const Tail&tail, const Head&head){ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + UnionFind uf(node_count); + for(int i=0; i +bool is_connected(const Tail&tail, const Head&head){ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + if(node_count == 0){ + return true; + } else { + UnionFind uf(node_count); + for(int i=0; i +void symmetric_depth_first_search( + const OutArc&out_arc, + const Head&head, + const OnRootFirstVisit&on_root_first_visit, + const OnRootLastVisit&on_root_last_visit, + const OnTreeUpArcVisit&on_tree_down_arc_visit, + const OnTreeDownArcVisit&on_tree_up_arc_visit, + const OnNonTreeArcVisit&on_non_tree_arc_visit +){ + const int arc_count = head.preimage_count(); + const int node_count = out_arc.preimage_count(); + + (void)arc_count; + (void)node_count; + + ArrayIDFunc<int> dfs_stack(node_count); + int dfs_stack_end = 0; + + ArrayIDFunc<int> parent_arc(node_count); + parent_arc.fill(-1); + + ArrayIDFunc<int> parent_node(node_count); + parent_node.fill(-1); + + typedef typename std::decay::type Iter; + ArrayIDFunc<Iter>next_out(node_count); + for(int i=0; i +ArrayIDIDFunc compute_biconnected_components( + const OutArc&out_arc, const Head&head, const BackArc&back_arc +){ + const int node_count = out_arc.preimage_count(); + const int arc_count = head.preimage_count(); + + (void)arc_count; + (void)node_count; + + ArrayIDFunc<int> arc_stack(arc_count); + int arc_stack_end = 0; + + ArrayIDIDFunc arc_component(arc_count, 0); + arc_component.fill(-1); + + ArrayIDFunc<int> depth(node_count); + ArrayIDFunc<int> min_succ_depth(node_count); + + auto min_to = [](int&x, int y){ + if(y < x) + x = y; +
}; + + auto on_first_root_visit = [&](int x){ + depth[x] = 0; + }; + + auto on_last_root_visit = [&](int x){ + + }; + + auto on_tree_down_arc_visit = [&](int x, int xy, int y){ + arc_stack[arc_stack_end++] = xy; + + min_succ_depth[y] = std::numeric_limits<int>::max(); + depth[y] = depth[x]+1; + }; + + auto on_tree_up_arc_visit = [&](int x, int xy, int y){ + arc_stack[arc_stack_end++] = xy; + + min_to(min_succ_depth[y], min_succ_depth[x]); + min_to(min_succ_depth[y], depth[x]); + + if(min_succ_depth[x] >= depth[y]){ + const int new_component_id = arc_component.image_count(); + arc_component.set_image_count(arc_component.image_count() + 1); + + while(arc_stack_end != 0){ + int ab = arc_stack[--arc_stack_end]; + int ba = back_arc(ab); + if(arc_component[ba] == -1){ + assert(arc_component[ab] == -1); + arc_component[ab] = new_component_id; + arc_component[ba] = new_component_id; + } + if(ba == xy) + break; + } + } + }; + + auto on_non_tree_arc_visit = [&](int x, int xy, int y){ + arc_stack[arc_stack_end++] = xy; + + min_to(min_succ_depth[x], depth[y]); + }; + + symmetric_depth_first_search( + out_arc, head, + on_first_root_visit, on_last_root_visit, + on_tree_down_arc_visit, on_tree_up_arc_visit, + on_non_tree_arc_visit + ); + + #ifndef NDEBUG + for(int i=0; i +bool is_biconnected(const Tail&tail, const Head&head){ + return compute_biconnected_components(invert_id_id_func(tail), head, compute_back_arc_permutation(tail, head)).image_count() <= 1; +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/contraction_graph.h b/solvers/flow-cutter-pace17/src/contraction_graph.h new file mode 100644 index 0000000..00da6c9 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/contraction_graph.h @@ -0,0 +1,279 @@ +#ifndef CONTRACTION_GRAPH_H +#define CONTRACTION_GRAPH_H + +#include "array_id_func.h" +#include "tiny_id_func.h" +#include "min_max.h" +#include "multi_arc.h" +#include +#include + +class EdgeContractionGraph{ +public: + void rewire_arcs_from_second_to_first(int u, int
v){ + union_find_parent[v] = u; + std::swap(next_adjacency_in_ring[u], next_adjacency_in_ring[v]); + } + + template<class F> + void forall_nodes_in_last_computed_neighborhood(const F&f){ + for(int i=0; i + EdgeContractionGraph(const Tail&tail, const Head&head): + next_adjacency_in_ring(tail.image_count()), + union_find_parent(tail.image_count()), + out_arc_begin(tail.image_count()), + out_arc_end(tail.image_count()), + arc_head(tail.preimage_count()), + in_neighborhood(tail.image_count()), + neighborhood(tail.image_count()), + neighborhood_size(0) + { + assert(is_symmetric(tail, head)); + for(int i=0; i next_adjacency_in_ring; + ArrayIDFunc<int> union_find_parent; + ArrayIDFunc<int> out_arc_begin; + ArrayIDFunc<int> out_arc_end; + ArrayIDFunc<int> arc_head; + BitIDFunc in_neighborhood; + ArrayIDFunc<int> neighborhood; + int neighborhood_size; +}; + +class NodeContractionGraph{ +public: + template<class Tail, class Head> + NodeContractionGraph(const Tail&tail, const Head&head): + g(tail, head), is_virtual(tail.image_count()){ + assert(is_symmetric(tail, head)); + is_virtual.fill(false); + } + + template<class F> + void forall_neighbors_then_contract_node(int v, const F&callback){ + g.compute_neighborhood_of(v); + g.forall_nodes_in_last_computed_neighborhood( + [&](int u){ + if(is_virtual(u)) + g.rewire_arcs_from_second_to_first(v, u); + } + ); + is_virtual.set(v, true); + g.compute_neighborhood_of(v); + g.forall_nodes_in_last_computed_neighborhood(callback); + } + +private: + EdgeContractionGraph g; + BitIDFunc is_virtual; +}; + +template<class Tail, class Head, class OnNewArc> +int compute_chordal_supergraph(const Tail&tail, const Head&head, const OnNewArc&on_new_arc){ + assert(is_symmetric(tail, head)); + + NodeContractionGraph g(tail, head); + int max_upward_degree = 0; + for(int x=0; x + SimpleNodeContractionAdjGraph(int node_count, const ArcList&arc_list): + g(node_count, arc_list), is_virtual(node_count, false), in_neighborhood(node_count, false){ + } + + int node_count()const{ + return g.node_count(); + } + + int was_node_contracted(int v)const{ + assert(0 <= v &&
v < node_count() && "v is out of bounds"); + return g.was_node_contracted(v) || is_virtual[v]; + } + + void contract_node(int v){ + assert(0 <= v && v < node_count() && "v is out of bounds"); + assert(!was_node_contracted(v)); + + is_virtual[v] = true; + for(auto h:g.out(v)) + if(is_virtual[h.target]) + g.contract_arc(v, h.target); + } + + std::vector out_followed_by_contract_node(int v){ + contract_node(v); + std::vector r = g.out(v); + if(r.size() == 1) + g.contract_arc(r[0].target, v); + return std::move(r); + } + + std::vectorout(int v)const{ + assert(0 <= v && v < node_count() && "v is out of bounds"); + if(was_node_contracted(v)) + return {}; + else{ + #ifdef EXPENSIVE_CONTRACTION_GRAPH_TESTS + for(auto x:in_neighborhood) + assert(!x); + #endif + std::vectorneighborhood; + for(auto u:g.out(v)){ + if(!is_virtual[u.target]){ + if(!in_neighborhood[u.target]){ + in_neighborhood[u.target] = true; + neighborhood.push_back(u); + } + }else{ + for(auto w:g.out(u.target)){ + assert(!is_virtual[w.target]); + if(w.target != v){ + if(!in_neighborhood[w.target]){ + in_neighborhood[w.target] = true; + neighborhood.push_back(w); + } + } + } + } + } + for(auto x:neighborhood) + in_neighborhood[x.target] = false; + #ifdef EXPENSIVE_CONTRACTION_GRAPH_TESTS + for(auto x:in_neighborhood) + assert(!x); + #endif + return std::move(neighborhood); + } + } + +private: + SimpleEdgeContractionAdjGraph g; + std::vectoris_virtual; + mutable std::vectorin_neighborhood; +}; + +template +std::vectorcompute_simple_contraction_hierarchy(const NodeList&node_list, const ArcList&arc_list){ + SimpleNodeContractionAdjGraph g(range_size(node_list), arc_list); + std::vectorshortcuts; + for(int v:node_list){ + for(auto u:g.out_followed_by_contract_node(v)) + shortcuts.push_back({v, u.target}); + } + + arc_sort(shortcuts.begin(), shortcuts.end()); + + return std::move(shortcuts); +}*/ + +#endif diff --git a/solvers/flow-cutter-pace17/src/count_range.h b/solvers/flow-cutter-pace17/src/count_range.h new 
file mode 100644 index 0000000..991a0d2 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/count_range.h @@ -0,0 +1,47 @@ +#ifndef COUNT_RANGE_H +#define COUNT_RANGE_H + +#include "range.h" +#include <cassert> +#include <iterator> + +struct CountIterator{ + typedef int value_type; + typedef int difference_type; + typedef const int* pointer; + typedef const int& reference; + typedef std::random_access_iterator_tag iterator_category; + + CountIterator&operator++(){ ++n_; return *this;} + CountIterator operator++(int) {CountIterator tmp(*this); operator++(); return tmp;} + CountIterator&operator--(){ --n_; return *this;} + CountIterator operator--(int) {CountIterator tmp(*this); operator--(); return tmp;} + int operator*() const {return n_;} + + const int*operator->() const {return &n_;} + + int operator[](int o)const{return n_ + o;} + CountIterator&operator+=(CountIterator::difference_type o){n_ += o; return *this;} + CountIterator&operator-=(CountIterator::difference_type o){n_ -= o; return *this;} + + int n_; +}; + +inline bool operator==(CountIterator l, CountIterator r){return l.n_ == r.n_;} +inline bool operator!=(CountIterator l, CountIterator r){return l.n_ != r.n_;} +inline bool operator< (CountIterator l, CountIterator r){return l.n_ < r.n_;} +inline bool operator> (CountIterator l, CountIterator r){return l.n_ > r.n_;} +inline bool operator<=(CountIterator l, CountIterator r){return l.n_ <= r.n_;} +inline bool operator>=(CountIterator l, CountIterator r){return l.n_ >= r.n_;} + +inline CountIterator::difference_type operator-(CountIterator l, CountIterator r){return l.n_ - r.n_;} +inline CountIterator operator-(CountIterator l, CountIterator::difference_type r){return {l.n_ - r};} +inline CountIterator operator+(CountIterator l, CountIterator::difference_type r){return {l.n_ + r};} +inline CountIterator operator+(CountIterator::difference_type l, CountIterator r){return {l + r.n_};} + +typedef Range<CountIterator> CountRange; + +inline CountRange count_range(int n){assert(n >= 0); return
{CountIterator{0}, CountIterator{n}}; } +inline CountRange count_range(int begin, int end){assert(begin <= end);return {CountIterator{begin}, CountIterator{end}};} + +#endif diff --git a/solvers/flow-cutter-pace17/src/filter.h b/solvers/flow-cutter-pace17/src/filter.h new file mode 100644 index 0000000..007fb3d --- /dev/null +++ b/solvers/flow-cutter-pace17/src/filter.h @@ -0,0 +1,73 @@ +#ifndef FILTER_H +#define FILTER_H + +#include "tiny_id_func.h" + +template<class Pred> +int count_true(const Pred&p){ + int sum = 0; + for(int i=0; i<p.preimage_count(); ++i) + if(p(i)) + ++sum; + return sum; +} + +template<class Pred, class IDFunc> +typename std::enable_if< + is_only_id_func<IDFunc>::value, + ArrayIDFunc::type> +>::type keep_if(const Pred&p, int new_preimage_count, const IDFunc&f){ + assert(p.preimage_count() == f.preimage_count()); + assert(new_preimage_count == count_true(p)); + + ArrayIDFunc::type>result(new_preimage_count); + + int out = 0; + for(int in=0; in<f.preimage_count(); ++in) + if(p(in)) + result[out++] = f(in); + return result; +} + +template<class Pred, class IDFunc> +typename std::enable_if< + is_id_id_func<IDFunc>::value, + ArrayIDIDFunc +>::type keep_if(const Pred&p, int new_preimage_count, const IDFunc&f){ + assert(p.preimage_count() == f.preimage_count()); + assert(new_preimage_count == count_true(p)); + + ArrayIDIDFunc result(new_preimage_count, f.image_count()); + + int out = 0; + for(int in=0; in<f.preimage_count(); ++in) + if(p(in)) + result[out++] = f(in); + return result; +} + +template<class Pred> +ArrayIDIDFunc compute_keep_function(const Pred&pred, int new_image_count){ + ArrayIDIDFunc f(pred.preimage_count(), new_image_count); + int next_id = 0; + for(int i=0; i +ArrayIDIDFunc compute_inverse_keep_function(const Pred&pred, int new_image_count){ + ArrayIDIDFunc f(new_image_count, pred.preimage_count()); + int next_id = 0; + for(int i=0; i +#include +#include +#include +#include + +#include "flow_cutter_config.h" + +namespace flow_cutter{ + + template<class Tail, class Head, class BackArc, class Capacity, class OutArc> + struct Graph{ + Graph( + Tail tail, + Head head, + BackArc back_arc, + Capacity capacity, + OutArc out_arc + ): + tail(std::move(tail)), + head(std::move(head)), + back_arc(std::move(back_arc)), + capacity(std::move(capacity)), + out_arc(std::move(out_arc)){} + + Tail tail; + Head head; + BackArc back_arc; + Capacity capacity; + OutArc
out_arc; + + int node_count()const{ + return tail.image_count(); + } + + int arc_count()const{ + return tail.preimage_count(); + } + }; + + struct TemporaryData{ + TemporaryData(){} + explicit TemporaryData(int node_count): + node_space(node_count){} + ArrayIDFunc<int>node_space; + }; + + template<class Tail, class Head, class BackArc, class Capacity, class OutArc> + Graph<Tail, Head, BackArc, Capacity, OutArc> + make_graph( + Tail tail, Head head, BackArc back_arc, + Capacity capacity, OutArc out_arc + ){ + return {std::move(tail), std::move(head), std::move(back_arc), std::move(capacity), std::move(out_arc)}; + } + + template<class Tail, class Head, class BackArc, class OutArc> + Graph< + Tail, Head, BackArc, + ConstIntIDFunc<1>, + OutArc + > + make_graph( + Tail tail, Head head, + BackArc back_arc, OutArc out_arc + ){ + return { + std::move(tail), std::move(head), std::move(back_arc), + ConstIntIDFunc<1>(tail.preimage_count()), + std::move(out_arc) + }; + } + + class PseudoDepthFirstSearch{ + public: + template<class Graph, class WasNodeSeen, class SeeNode, class ShouldFollowArc, class OnNewArc> + void operator()( + const Graph&graph, TemporaryData&tmp, int source_node, + const WasNodeSeen&was_node_seen, const SeeNode&see_node, + const ShouldFollowArc&should_follow_arc, const OnNewArc&on_new_arc + )const{ + int stack_end = 1; + auto&stack = tmp.node_space; + stack[0] = source_node; + + while(stack_end != 0){ + int x = stack[--stack_end]; + for(auto xy : graph.out_arc(x)){ + on_new_arc(xy); + int y = graph.head(xy); + if(!was_node_seen(y)){ + if(should_follow_arc(xy)){ + if(!see_node(y)) + return; + stack[stack_end++] = y; + } + } + } + } + } + }; + + class BreadthFirstSearch{ + public: + template<class Graph, class WasNodeSeen, class SeeNode, class ShouldFollowArc, class OnNewArc> + void operator()( + const Graph&graph, TemporaryData&tmp, int source_node, + const WasNodeSeen&was_node_seen, const SeeNode&see_node, + const ShouldFollowArc&should_follow_arc, const OnNewArc&on_new_arc + )const{ + int queue_begin = 0, queue_end = 1; + auto&queue = tmp.node_space; + queue[0] = source_node; + while(queue_begin != queue_end){ + int x = queue[queue_begin++]; + for(auto xy : graph.out_arc(x)){ + on_new_arc(xy); + int y = graph.head(xy); + if(!was_node_seen(y)){ + if(should_follow_arc(xy)){
if(!see_node(y)) + return; + queue[queue_end++] = y; + } + } + } + } + } + }; + + struct UnitFlow{ + UnitFlow(){} + explicit UnitFlow(int preimage_count):flow(preimage_count){} + + void clear(){ + flow.fill(1); + } + + int preimage_count()const{ + return flow.preimage_count(); + } + + template<class Graph> + void increase(const Graph&graph, int a){ + auto f = flow(a); + assert((f == 0 || f == 1) && "Flow is already maximum; can not be increased"); + assert(flow(graph.back_arc(a)) == 2-f && "Back arc has invalid flow"); + ++f; + flow.set(a, f); + flow.set(graph.back_arc(a), 2-f); + } + + template<class Graph> + void decrease(const Graph&graph, int a){ + auto f = flow(a); + assert((f == 1 || f == 2) && "Flow is already minimum; can not be decreased"); + assert(flow(graph.back_arc(a)) == 2-f && "Back arc has invalid flow"); + --f; + flow.set(a, f); + flow.set(graph.back_arc(a), 2-f); + } + + int operator()(int a)const{ + return static_cast<int>(flow(a))-1; + } + + void swap(UnitFlow&o){ + flow.swap(o.flow); + } + + TinyIntIDFunc<2>flow; + }; + + class BasicNodeSet{ + public: + template<class Graph> + explicit BasicNodeSet(const Graph&graph): + node_count_inside_(0), + inside_flag(graph.node_count()), + extra_node(-1){} + + void clear(){ + node_count_inside_ = 0; + inside_flag.fill(false); + } + + bool can_grow()const{ + return extra_node != -1; + } + + template<class Graph, class SearchAlgorithm, class OnNewNode, class ShouldFollowArc, class OnNewArc> + void grow( + const Graph&graph, + TemporaryData&tmp, + const SearchAlgorithm&search_algo, + const OnNewNode&on_new_node, // on_new_node(x) is called for every node x.
If it returns false then the search is stopped; if it returns true it continues + const ShouldFollowArc&should_follow_arc, // is called for a subset of arcs and must say whether the arc should be followed + const OnNewArc&on_new_arc // on_new_arc(xy) is called for every arc xy with x in the set + ){ + assert(can_grow()); + + auto see_node = [&](int x){ + assert(!inside_flag(x)); + inside_flag.set(x, true); + ++this->node_count_inside_; + return on_new_node(x); + }; + + auto was_node_seen = [&](int x){ + return inside_flag(x); + }; + + search_algo(graph, tmp, extra_node, was_node_seen, see_node, should_follow_arc, on_new_arc); + extra_node = -1; + } + + template<class Graph> + void set_extra_node(const Graph&graph, int x){ + assert(!inside_flag(x)); + assert(extra_node == -1); + inside_flag.set(x, true); + ++node_count_inside_; + extra_node = x; + } + + bool is_inside(int x) const { + return inside_flag(x); + } + + int node_count_inside() const { + return node_count_inside_; + } + + int max_node_count_inside() const { + return inside_flag.preimage_count(); + } + + private: + int node_count_inside_; + BitIDFunc inside_flag; + int extra_node; + }; + + class ReachableNodeSet; + + class AssimilatedNodeSet{ + friend class ReachableNodeSet; + public: + template<class Graph> + explicit AssimilatedNodeSet(const Graph&graph): + node_set(graph){} + + void clear(){ + node_set.clear(); + front.clear(); + } + + template<class Graph> + void set_extra_node(const Graph&graph, int x){ + node_set.set_extra_node(graph, x); + } + + bool can_grow()const{ + return node_set.can_grow(); + } + + template<class Graph, class SearchAlgorithm, class OnNewNode, class ShouldFollowArc, class OnNewArc, class HasFlow> + void grow( + const Graph&graph, + TemporaryData&tmp, + const SearchAlgorithm&search_algo, + const OnNewNode&on_new_node, // on_new_node(x) is called for every node x.
If it returns false then the search is stopped; if it returns true it continues + const ShouldFollowArc&should_follow_arc, // is called for a subset of arcs and must say whether the arc should be followed + const OnNewArc&on_new_arc, // on_new_arc(xy) is called for every arc xy with x in the set + const HasFlow&has_flow + ){ + auto my_on_new_arc = [&](int xy){ + if(has_flow(xy)) + front.push_back(xy); + on_new_arc(xy); + }; + + node_set.grow(graph, tmp, search_algo, on_new_node, should_follow_arc, my_on_new_arc); + } + + bool is_inside(int x) const { + return node_set.is_inside(x); + } + + int node_count_inside() const { + return node_set.node_count_inside(); + } + + int max_node_count_inside() const { + return node_set.max_node_count_inside(); + } + + template<class Graph> + void shrink_cut_front(const Graph&graph){ + front.erase( + std::remove_if( + front.begin(), front.end(), + [&](int xy){ return node_set.is_inside(graph.head(xy)); } + ), + front.end() + ); + } + + const std::vector<int>&get_cut_front() const { + return front; + } + + private: + BasicNodeSet node_set; + std::vector<int>front; + }; + + class ReachableNodeSet{ + public: + template<class Graph> + explicit ReachableNodeSet(const Graph&graph): + node_set(graph), predecessor(graph.node_count()){} + + void reset(const AssimilatedNodeSet&other){ + node_set = other.node_set; + } + + void clear(){ + node_set.clear(); + } + + template<class Graph> + void set_extra_node(const Graph&graph, int x){ + node_set.set_extra_node(graph, x); + } + + bool can_grow()const{ + return node_set.can_grow(); + } + + template<class Graph, class SearchAlgorithm, class OnNewNode, class ShouldFollowArc, class OnNewArc> + void grow( + const Graph&graph, + TemporaryData&tmp, + const SearchAlgorithm&search_algo, + const OnNewNode&on_new_node, // on_new_node(x) is called for every node x.
If it returns false then the search is stopped; if it returns true it continues + const ShouldFollowArc&should_follow_arc, // is called for a subset of arcs and must say whether the arc should be followed + const OnNewArc&on_new_arc // on_new_arc(xy) is called for every arc xy with x in the set + ){ + auto my_should_follow_arc = [&](int xy){ + predecessor[graph.head(xy)] = xy; + return should_follow_arc(xy); + }; + + node_set.grow(graph, tmp, search_algo, on_new_node, my_should_follow_arc, on_new_arc); + } + + bool is_inside(int x) const { + return node_set.is_inside(x); + } + + int node_count_inside() const { + return node_set.node_count_inside(); + } + + int max_node_count_inside() const { + return node_set.max_node_count_inside(); + } + + template<class Graph, class IsSource, class OnNewArc> + void forall_arcs_in_path_to(const Graph&graph, const IsSource&is_source, int target, const OnNewArc&on_new_arc){ + int x = target; + while(!is_source(x)){ + on_new_arc(predecessor[x]); + x = graph.tail(predecessor[x]); + } + } + + private: + BasicNodeSet node_set; + ArrayIDFunc<int>predecessor; + }; + + struct SourceTargetPair{ + int source, target; + }; + + struct CutterStateDump{ + BitIDFunc source_assimilated, target_assimilated, source_reachable, target_reachable, flow; + }; + + class BasicCutter{ + public: + template<class Graph> + explicit BasicCutter(const Graph&graph): + assimilated{AssimilatedNodeSet(graph), AssimilatedNodeSet(graph)}, + reachable{ReachableNodeSet(graph), ReachableNodeSet(graph)}, + flow(graph.arc_count()), + cut_available(false) + {} + + template<class Graph, class SearchAlgorithm> + void init(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo, SourceTargetPair p){ + assimilated[source_side].clear(); + reachable[source_side].clear(); + assimilated[target_side].clear(); + reachable[target_side].clear(); + flow.clear(); + + assimilated[source_side].set_extra_node(graph, p.source); + reachable[source_side].set_extra_node(graph, p.source); + assimilated[target_side].set_extra_node(graph, p.target); +
reachable[target_side].set_extra_node(graph, p.target); + + grow_reachable_sets(graph, tmp, search_algo, source_side); + grow_assimilated_sets(graph, tmp, search_algo); + + cut_available = true; + check_invariants(graph); + } + + CutterStateDump dump_state()const{ + return { + id_func( + assimilated[source_side].max_node_count_inside(), + [&](int x){ + return assimilated[source_side].is_inside(x); + } + ), + id_func( + assimilated[target_side].max_node_count_inside(), + [&](int x){ + return assimilated[target_side].is_inside(x); + } + ), + id_func( + assimilated[source_side].max_node_count_inside(), + [&](int x){ + return reachable[source_side].is_inside(x); + } + ), + id_func( + assimilated[target_side].max_node_count_inside(), + [&](int x){ + return reachable[target_side].is_inside(x); + } + ), + id_func( + flow.preimage_count(), + [&](int xy){ + return flow(xy) != 0; + } + ) + }; + } + + //! Returns true if a new cut was found. Returns false if no cut was found. False implies that no cut + //! will be found in the future. Repeatedly calling this function after it returned false does not do + //! anything.
+ template<class Graph, class SearchAlgorithm, class ScorePierceNode> + bool advance(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo, const ScorePierceNode&score_pierce_node){ + assert(cut_available); + + check_invariants(graph); + int side = get_current_cut_side(); + if(assimilated[side].node_count_inside() >= graph.node_count()/2){ + cut_available = false; + return false; + } + + int pierce_node = select_pierce_node(graph, side, score_pierce_node); + + if(pierce_node == -1){ + cut_available = false; + return false; + } + + assert(!assimilated[1-side].is_inside(pierce_node)); + + assimilated[side].set_extra_node(graph, pierce_node); + reachable[side].set_extra_node(graph, pierce_node); + + grow_reachable_sets(graph, tmp, search_algo, side); + grow_assimilated_sets(graph, tmp, search_algo); + check_invariants(graph); + cut_available = true; + return true; + } + + bool is_cut_available()const{ + return cut_available; + } + + template<class Graph, class ScorePierceNode> + bool does_next_advance_increase_cut(const Graph&graph, const ScorePierceNode&score_pierce_node){ + int side = get_current_cut_side(); + + if(assimilated[side].node_count_inside() >= graph.node_count()/2){ + return true; + } + + + int pierce_node = select_pierce_node(graph, side, score_pierce_node); + + if(pierce_node == -1) + return true; + else if(reachable[1-side].is_inside(pierce_node)) + return true; + else + return false; + } + + bool is_on_smaller_side(int x)const{ + return assimilated[get_current_cut_side()].is_inside(x); + } + + static const int source_side = 0; + static const int target_side = 1; + + int get_current_cut_side()const{ + if( + reachable[source_side].node_count_inside() == assimilated[source_side].node_count_inside() && ( + reachable[target_side].node_count_inside() != assimilated[target_side].node_count_inside() || + assimilated[source_side].node_count_inside() <= assimilated[target_side].node_count_inside() + ) + ) + return source_side; + else + return target_side; + } + + int get_current_smaller_cut_side_size()const{ + return
assimilated[get_current_cut_side()].node_count_inside(); + } + + const std::vector&get_current_cut()const{ + return assimilated[get_current_cut_side()].get_cut_front(); + } + + int get_assimilated_node_count()const{ + return assimilated[source_side].node_count_inside() + assimilated[target_side].node_count_inside(); + } + + private: + template + int select_pierce_node(const Graph&graph, int side, const ScorePierceNode&score_pierce_node){ + + int pierce_node = -1; + int max_score = std::numeric_limits::min(); + for(auto xy : assimilated[side].get_cut_front()){ + int y = graph.head(xy); + if(!assimilated[1-side].is_inside(y)){ + int score = score_pierce_node(y, side, reachable[1-side].is_inside(y)); + + if(score > max_score){ + max_score = score; + pierce_node = y; + } + } + } + + return pierce_node; + } + + template + bool is_saturated(const Graph&graph, int direction, int xy){ + if(direction == target_side) + xy = graph.back_arc(xy); + return graph.capacity(xy) == flow(xy); + } + + + template + void grow_reachable_sets(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo, int pierced_side){ + + int my_source_side = pierced_side; + int my_target_side = 1-pierced_side; + + assert(reachable[pierced_side].can_grow()); + + auto is_forward_saturated = [&,this](int xy){ + return this->is_saturated(graph, my_source_side, xy); + }; + + auto is_backward_saturated = [&,this](int xy){ + return this->is_saturated(graph, my_target_side, xy); + }; + + auto is_source = [&](int x){ + return assimilated[my_source_side].is_inside(x); + }; + + auto is_target = [&](int x){ + return assimilated[my_target_side].is_inside(x); + }; + + auto increase_flow = [&](int xy){ + if(pierced_side == source_side) + flow.increase(graph, xy); + else + flow.decrease(graph, xy); + }; + + bool was_flow_augmented = false; + + int target_hit; + do{ + target_hit = -1; + auto on_new_node = [&](int x){ + if(is_target(x)){ + target_hit = x; + return false; + } else + return true; + }; + auto 
should_follow_arc = [&](int xy){ return !is_forward_saturated(xy); }; + auto on_new_arc = [](int xy){}; + reachable[my_source_side].grow(graph, tmp, search_algo, on_new_node, should_follow_arc, on_new_arc); + + if(target_hit != -1){ + check_flow_conservation(graph); + reachable[my_source_side].forall_arcs_in_path_to(graph, is_source, target_hit, increase_flow); + check_flow_conservation(graph); + reachable[my_source_side].reset(assimilated[my_source_side]); + + was_flow_augmented = true; + check_flow_conservation(graph); + } + }while(target_hit != -1); + + if(was_flow_augmented){ + reachable[my_target_side].reset(assimilated[my_target_side]); + auto on_new_node = [&](int x){return true;}; + auto should_follow_arc = [&](int xy){ return !is_backward_saturated(xy); }; + auto on_new_arc = [](int xy){}; + reachable[my_target_side].grow(graph, tmp, search_algo, on_new_node, should_follow_arc, on_new_arc); + } + + } + + template + void grow_assimilated_sets(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo){ + auto is_forward_saturated = [&,this](int xy){ + return this->is_saturated(graph, source_side, xy); + }; + + auto is_backward_saturated = [&,this](int xy){ + return this->is_saturated(graph, target_side, xy); + }; + + if(reachable[source_side].node_count_inside() <= reachable[target_side].node_count_inside()){ + auto on_new_node = [&](int x){return true;}; + auto should_follow_arc = [&](int xy){ return !is_forward_saturated(xy); }; + auto on_new_arc = [](int xy){}; + auto has_flow = [&](int xy){ return flow(xy) != 0; }; + assimilated[source_side].grow(graph, tmp, search_algo, on_new_node, should_follow_arc, on_new_arc, has_flow); + assimilated[source_side].shrink_cut_front(graph); + }else{ + auto on_new_node = [&](int x){return true;}; + auto should_follow_arc = [&](int xy){ return !is_backward_saturated(xy); }; + auto on_new_arc = [](int xy){}; + auto has_flow = [&](int xy){ return flow(xy) != 0; }; + assimilated[target_side].grow(graph, tmp, 
search_algo, on_new_node, should_follow_arc, on_new_arc, has_flow); + assimilated[target_side].shrink_cut_front(graph); + } + } + + template + void check_flow_conservation(const Graph&graph){ + #ifndef NDEBUG + for(int x=0; x + void check_invariants(const Graph&graph){ + #ifndef NDEBUG + for(int side = 0; side < 2; ++side) + assert(assimilated[side].node_count_inside() > 0 && "Each side must contain at least one node"); + + for(int x=0; x + static void compute_hop_distance_from(const Graph&graph, TemporaryData&tmp, int source, ArrayIDFunc&dist){ + dist.fill(std::numeric_limits::max()); + dist[source] = 0; + + auto was_node_seen = [&](int x){return false;}; + auto see_node = [](int x){ return true; }; + auto should_follow_arc = [&](int xy){ + if(dist(graph.tail(xy)) < dist(graph.head(xy)) - 1){ + dist[graph.head(xy)] = dist(graph.tail(xy))+1; + return true; + }else{ + return false; + } + }; + auto on_new_arc = [&](int xy){}; + BreadthFirstSearch()(graph, tmp, source, was_node_seen, see_node, should_follow_arc, on_new_arc); + } + public: + template + DistanceAwareCutter(const Graph&graph): + cutter(graph), + node_dist{ArrayIDFunc{graph.node_count()}, ArrayIDFunc{graph.node_count()}}{} + + template + void init(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo, DistanceType dist_type, SourceTargetPair p, int random_seed){ + cutter.init(graph, tmp, search_algo, p); + + rng.seed(random_seed); + + switch(dist_type){ + case DistanceType::hop_distance: + compute_hop_distance_from(graph, tmp, p.source, node_dist[source_side]); + compute_hop_distance_from(graph, tmp, p.target, node_dist[target_side]); + break; + case DistanceType::no_distance: + break; + default: + assert(false); + break; + } + } + + CutterStateDump dump_state()const{ + return cutter.dump_state(); + } + + template + bool advance(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo, const ScorePierceNode&score_pierce_node){ + auto my_score_pierce_node = [&](int x, int 
side, bool causes_augmenting_path){
+			return score_pierce_node(x, side, causes_augmenting_path, node_dist[side](x), node_dist[1-side](x));
+		};
+		return cutter.advance(graph, tmp, search_algo, my_score_pierce_node);
+	}
+
+	bool is_cut_available()const{
+		return cutter.is_cut_available();
+	}
+
+	template<class Graph, class ScorePierceNode>
+	bool does_next_advance_increase_cut(const Graph&graph, const ScorePierceNode&score_pierce_node){
+		auto my_score_pierce_node = [&](int x, int side, bool causes_augmenting_path){
+			return score_pierce_node(x, side, causes_augmenting_path, node_dist[side](x), node_dist[1-side](x));
+		};
+		return cutter.does_next_advance_increase_cut(graph, my_score_pierce_node);
+	}
+
+	static const int source_side = BasicCutter::source_side;
+	static const int target_side = BasicCutter::target_side;
+
+	int get_current_cut_side()const{
+		return cutter.get_current_cut_side();
+	}
+
+	int get_current_smaller_cut_side_size()const{
+		return cutter.get_current_smaller_cut_side_size();
+	}
+
+	const std::vector<int>&get_current_cut()const{
+		return cutter.get_current_cut();
+	}
+
+	int get_assimilated_node_count()const{
+		return cutter.get_assimilated_node_count();
+	}
+
+	bool is_on_smaller_side(int x)const{
+		return cutter.is_on_smaller_side(x);
+	}
+
+	bool is_empty()const{
+		return node_dist[0].preimage_count() == 0;
+	}
+private:
+	BasicCutter cutter;
+	ArrayIDFunc<int> node_dist[2];
+	std::mt19937 rng;
+};
+
+class MultiCutter{
+public:
+	MultiCutter(){}
+
+	template<class Graph, class TemporaryData, class SearchAlgorithm, class ScorePierceNode>
+	void init(
+		const Graph&graph, TemporaryData&tmp,
+		const SearchAlgorithm&search_algo, const ScorePierceNode&score_pierce_node, DistanceType dist_type,
+		const std::vector<SourceTargetPair>&p, int random_seed, bool should_skip_non_maximum_sides = true
+	){
+		while(cutter_list.size() > p.size())
+			cutter_list.pop_back(); // cannot use resize because that requires a default constructor...
+ while(cutter_list.size() < p.size()) + cutter_list.emplace_back(graph); + + for(int i=0; i<(int)p.size(); ++i){ + auto&x = cutter_list[i]; + auto my_score_pierce_node = [&](int x, int side, bool causes_augmenting_path, int source_dist, int target_dist){ + return score_pierce_node(x, side, causes_augmenting_path, source_dist, target_dist, i); + }; + + x.init(graph, tmp, search_algo, dist_type, p[i], random_seed+1+i); + if(should_skip_non_maximum_sides) + while(!x.does_next_advance_increase_cut(graph, my_score_pierce_node)) + x.advance(graph, tmp, search_algo, my_score_pierce_node); + } + + int best_cutter_id = -1; + int best_cut_size = std::numeric_limits::max(); + int best_cutter_weight = 0; + + for(int i=0; i<(int)p.size(); ++i){ + auto&x = cutter_list[i]; + if( + (int)x.get_current_cut().size() < best_cut_size + || ( + (int)x.get_current_cut().size() == best_cut_size && + x.get_current_smaller_cut_side_size() > best_cutter_weight + ) + ){ + best_cutter_id = i; + best_cut_size = x.get_current_cut().size(); + best_cutter_weight = x.get_current_smaller_cut_side_size(); + } + } + + current_cutter_id = best_cutter_id; + current_smaller_side_size = cutter_list[current_cutter_id].get_current_smaller_cut_side_size(); + } + + CutterStateDump dump_state()const{ + if(cutter_list.size() != 1) + throw std::runtime_error("Can only dump the cutter state if a single instance is run"); + return cutter_list[0].dump_state(); + } + + template + bool advance(const Graph&graph, TemporaryData&tmp, const SearchAlgorithm&search_algo, const ScorePierceNode&score_pierce_node, bool should_skip_non_maximum_sides = true){ + if(graph.node_count() /2 == get_current_smaller_cut_side_size()) + return false; + + int current_cut_size = cutter_list[current_cutter_id].get_current_cut().size(); + for(;;){ + for(int i=0; i<(int)cutter_list.size(); ++i){ + auto x = std::move(cutter_list[i]); + auto my_score_pierce_node = [&](int x, int side, bool causes_augmenting_path, int source_dist, int 
target_dist){ + return score_pierce_node(x, side, causes_augmenting_path, source_dist, target_dist, i); + }; + if(x.is_cut_available()){ + if((int)x.get_current_cut().size() == current_cut_size){ + assert(x.does_next_advance_increase_cut(graph, my_score_pierce_node)); + if(x.advance(graph, tmp, search_algo, my_score_pierce_node)){ + assert((int)x.get_current_cut().size() > current_cut_size); + while(!x.does_next_advance_increase_cut(graph, my_score_pierce_node)){ + if(!x.advance(graph, tmp, search_algo, my_score_pierce_node)) + break; + if(!should_skip_non_maximum_sides) + break; + } + } + } + } + + cutter_list[i] = std::move(x); + } + + int next_cut_size = std::numeric_limits::max(); + for(auto&x:cutter_list) + if(x.is_cut_available()) + min_to(next_cut_size, (int)x.get_current_cut().size()); + + if(next_cut_size == std::numeric_limits::max()) + return false; + + + int best_cutter_weight = 0; + int best_cutter_id = -1; + for(int i=0; i<(int)cutter_list.size(); ++i){ + if(cutter_list[i].is_cut_available()){ + if( + (int)cutter_list[i].get_current_cut().size() == next_cut_size && + cutter_list[i].get_current_smaller_cut_side_size() > best_cutter_weight + ){ + best_cutter_id = i; + best_cutter_weight = cutter_list[i].get_current_smaller_cut_side_size(); + } + } + } + + assert(best_cutter_id != -1); + + current_cut_size = next_cut_size; + + if(best_cutter_weight <= current_smaller_side_size) + continue; + + current_cutter_id = best_cutter_id; + current_smaller_side_size = cutter_list[current_cutter_id].get_current_smaller_cut_side_size(); + return true; + } + } + + int get_current_smaller_cut_side_size()const{ + return current_smaller_side_size; + } + + bool is_on_smaller_side(int x)const{ + return cutter_list[current_cutter_id].is_on_smaller_side(x); + } + + const std::vector&get_current_cut()const{ + return cutter_list[current_cutter_id].get_current_cut(); + } + + int get_current_cutter_id()const{ + return current_cutter_id; + } + + private: + 
std::vectorcutter_list; + int current_smaller_side_size; + int current_cutter_id; + }; + + struct PierceNodeScore{ + static constexpr unsigned hash_modulo = ((1u<<31u)-1u); + unsigned hash_factor, hash_offset; + + PierceNodeScore(Config config): config(config){ + std::mt19937 gen; + gen.seed(config.random_seed); + gen(); + hash_factor = gen() % hash_modulo; + hash_offset = gen() % hash_modulo; + } + + Config config; + + + int operator()(int x, int side, bool causes_augmenting_path, int source_dist, int target_dist, int cutter_id)const{ + + auto random_number = [&]{ + if(side == BasicCutter::source_side) + return (hash_factor * (unsigned)(x<<1) + hash_offset) % hash_modulo; + else + return (hash_factor * ((unsigned)(x<<1)+1) + hash_offset) % hash_modulo; + }; + + int score; + switch(config.pierce_rating){ + case Config::PierceRating::max_target_minus_source_hop_dist: + score = target_dist - source_dist; + break; + case Config::PierceRating::max_target_hop_dist: + score = target_dist; + break; + case Config::PierceRating::min_source_hop_dist: + score = -source_dist; + break; + case Config::PierceRating::oldest: + score = 0; + break; + case Config::PierceRating::random: + score = random_number(); + break; + + default: + assert(false); + score = 0; + } + switch(config.avoid_augmenting_path){ + case Config::AvoidAugmentingPath::avoid_and_pick_best: + if(causes_augmenting_path) + score -= 1000000000; + break; + case Config::AvoidAugmentingPath::do_not_avoid: + break; + case Config::AvoidAugmentingPath::avoid_and_pick_oldest: + if(causes_augmenting_path) + score = -1000000000; + break; + case Config::AvoidAugmentingPath::avoid_and_pick_random: + if(causes_augmenting_path) + score = random_number() - 1000000000; + break; + default: + assert(false); + score = 0; + } + return score; + } + }; + + template + class SimpleCutter{ + public: + SimpleCutter(const Graph&graph, Config config): + graph(graph), tmp(graph.node_count()), config(config){ + } + + void init(const 
std::vector&p, int random_seed){ + DistanceType dist_type; + + if( + config.pierce_rating == Config::PierceRating::min_source_hop_dist || + config.pierce_rating == Config::PierceRating::max_target_hop_dist || + config.pierce_rating == Config::PierceRating::max_target_minus_source_hop_dist + ) + dist_type = DistanceType::hop_distance; + else + dist_type = DistanceType::no_distance; + + switch(config.graph_search_algorithm){ + case Config::GraphSearchAlgorithm::pseudo_depth_first_search: + cutter.init(graph, tmp, PseudoDepthFirstSearch(), PierceNodeScore(config), dist_type, p, random_seed, config.skip_non_maximum_sides == Config::SkipNonMaximumSides::skip); + break; + + case Config::GraphSearchAlgorithm::breadth_first_search: + cutter.init(graph, tmp, BreadthFirstSearch(), PierceNodeScore(config), dist_type, p, random_seed, config.skip_non_maximum_sides == Config::SkipNonMaximumSides::skip); + break; + + case Config::GraphSearchAlgorithm::depth_first_search: + throw std::runtime_error("depth first search is not yet implemented"); + default: + assert(false); + + } + } + + bool advance(){ + + switch(config.graph_search_algorithm){ + case Config::GraphSearchAlgorithm::pseudo_depth_first_search: + return cutter.advance(graph, tmp, PseudoDepthFirstSearch(), PierceNodeScore(config), config.skip_non_maximum_sides == Config::SkipNonMaximumSides::skip); + + case Config::GraphSearchAlgorithm::breadth_first_search: + return cutter.advance(graph, tmp, BreadthFirstSearch(), PierceNodeScore(config), config.skip_non_maximum_sides == Config::SkipNonMaximumSides::skip); + + case Config::GraphSearchAlgorithm::depth_first_search: + throw std::runtime_error("depth first search is not yet implemented"); + default: + assert(false); + return false; + } + } + + CutterStateDump dump_state()const{ + return cutter.dump_state(); + } + + int get_current_smaller_cut_side_size()const{ + return cutter.get_current_smaller_cut_side_size(); + } + + bool is_on_smaller_side(int x)const{ + return 
cutter.is_on_smaller_side(x);
+	}
+
+	const std::vector<int>&get_current_cut()const{
+		return cutter.get_current_cut();
+	}
+
+	int get_current_cutter_id()const{
+		return cutter.get_current_cutter_id();
+	}
+
+private:
+	const Graph&graph;
+	TemporaryData tmp;
+	MultiCutter cutter;
+	Config config;
+};
+
+template<class Graph>
+SimpleCutter<Graph> make_simple_cutter(const Graph&graph, Config config){
+	return SimpleCutter<Graph>(graph, config);
+}
+
+std::vector<SourceTargetPair> select_random_source_target_pairs(int node_count, int cutter_count, int seed){
+	std::vector<SourceTargetPair> p(cutter_count);
+	std::mt19937 rng(seed);
+	for(auto&x:p){
+		do{
+			// We do not use std::uniform_int_distribution because it produces different results for different compilers
+			x.source = rng()%node_count;
+			x.target = rng()%node_count;
+		}while(x.source == x.target);
+	}
+	return p;
+}
+}
+
+#endif
+
diff --git a/solvers/flow-cutter-pace17/src/flow_cutter_config.h b/solvers/flow-cutter-pace17/src/flow_cutter_config.h
new file mode 100644
index 0000000..68e56cd
--- /dev/null
+++ b/solvers/flow-cutter-pace17/src/flow_cutter_config.h
@@ -0,0 +1,188 @@
+
+#ifndef FLOW_CUTTER_CONFIG_H
+#define FLOW_CUTTER_CONFIG_H
+#include <string>
+#include <sstream>
+#include <iomanip>
+#include <stdexcept>
+#include <cassert>
+
+namespace flow_cutter{
+	struct Config{
+		int cutter_count;
+		int random_seed;
+		int max_cut_size;
+		float min_small_side_size;
+
+		enum class SkipNonMaximumSides{
+			skip,
+			no_skip
+		};
+		SkipNonMaximumSides skip_non_maximum_sides;
+
+		enum class SeparatorSelection{
+			node_min_expansion,
+			edge_min_expansion,
+			node_first,
+			edge_first
+		};
+		SeparatorSelection separator_selection;
+
+		enum class GraphSearchAlgorithm{
+			pseudo_depth_first_search,
+			breadth_first_search,
+			depth_first_search
+		};
+		GraphSearchAlgorithm graph_search_algorithm;
+
+		enum class AvoidAugmentingPath{
+			avoid_and_pick_best,
+			do_not_avoid,
+			avoid_and_pick_oldest,
+			avoid_and_pick_random
+		};
+		AvoidAugmentingPath avoid_augmenting_path;
+
+		enum class PierceRating{
+			max_target_minus_source_hop_dist,
min_source_hop_dist, + max_target_hop_dist, + random, + oldest + }; + PierceRating pierce_rating; + + Config(): + cutter_count(3), + random_seed(5489), + max_cut_size(1000), + min_small_side_size(0.2), + skip_non_maximum_sides(SkipNonMaximumSides::skip), + separator_selection(SeparatorSelection::node_min_expansion), + graph_search_algorithm(GraphSearchAlgorithm::pseudo_depth_first_search), + avoid_augmenting_path(AvoidAugmentingPath::avoid_and_pick_best), + pierce_rating(PierceRating::max_target_minus_source_hop_dist){} + + void set(const std::string&var, const std::string&val){ + int val_id = -1; + if(var == "SkipNonMaximumSides" || var == "skip_non_maximum_sides"){ + if(val == "skip" || val_id == static_cast(SkipNonMaximumSides::skip)) + skip_non_maximum_sides = SkipNonMaximumSides::skip; + else if(val == "no_skip" || val_id == static_cast(SkipNonMaximumSides::no_skip)) + skip_non_maximum_sides = SkipNonMaximumSides::no_skip; + else throw std::runtime_error("Unknown config value "+val+" for variable SkipNonMaximumSides; valid are skip, no_skip"); + }else if(var == "SeparatorSelection" || var == "separator_selection"){ + if(val == "node_min_expansion" || val_id == static_cast(SeparatorSelection::node_min_expansion)) + separator_selection = SeparatorSelection::node_min_expansion; + else if(val == "edge_min_expansion" || val_id == static_cast(SeparatorSelection::edge_min_expansion)) + separator_selection = SeparatorSelection::edge_min_expansion; + else if(val == "node_first" || val_id == static_cast(SeparatorSelection::node_first)) + separator_selection = SeparatorSelection::node_first; + else if(val == "edge_first" || val_id == static_cast(SeparatorSelection::edge_first)) + separator_selection = SeparatorSelection::edge_first; + else throw std::runtime_error("Unknown config value "+val+" for variable SeparatorSelection; valid are node_min_expansion, edge_min_expansion, node_first, edge_first"); + }else if(var == "GraphSearchAlgorithm" || var == 
"graph_search_algorithm"){ + if(val == "pseudo_depth_first_search" || val_id == static_cast(GraphSearchAlgorithm::pseudo_depth_first_search)) + graph_search_algorithm = GraphSearchAlgorithm::pseudo_depth_first_search; + else if(val == "breadth_first_search" || val_id == static_cast(GraphSearchAlgorithm::breadth_first_search)) + graph_search_algorithm = GraphSearchAlgorithm::breadth_first_search; + else if(val == "depth_first_search" || val_id == static_cast(GraphSearchAlgorithm::depth_first_search)) + graph_search_algorithm = GraphSearchAlgorithm::depth_first_search; + else throw std::runtime_error("Unknown config value "+val+" for variable GraphSearchAlgorithm; valid are pseudo_depth_first_search, breadth_first_search, depth_first_search"); + }else if(var == "AvoidAugmentingPath" || var == "avoid_augmenting_path"){ + if(val == "avoid_and_pick_best" || val_id == static_cast(AvoidAugmentingPath::avoid_and_pick_best)) + avoid_augmenting_path = AvoidAugmentingPath::avoid_and_pick_best; + else if(val == "do_not_avoid" || val_id == static_cast(AvoidAugmentingPath::do_not_avoid)) + avoid_augmenting_path = AvoidAugmentingPath::do_not_avoid; + else if(val == "avoid_and_pick_oldest" || val_id == static_cast(AvoidAugmentingPath::avoid_and_pick_oldest)) + avoid_augmenting_path = AvoidAugmentingPath::avoid_and_pick_oldest; + else if(val == "avoid_and_pick_random" || val_id == static_cast(AvoidAugmentingPath::avoid_and_pick_random)) + avoid_augmenting_path = AvoidAugmentingPath::avoid_and_pick_random; + else throw std::runtime_error("Unknown config value "+val+" for variable AvoidAugmentingPath; valid are avoid_and_pick_best, do_not_avoid, avoid_and_pick_oldest, avoid_and_pick_random"); + }else if(var == "PierceRating" || var == "pierce_rating"){ + if(val == "max_target_minus_source_hop_dist" || val_id == static_cast(PierceRating::max_target_minus_source_hop_dist)) + pierce_rating = PierceRating::max_target_minus_source_hop_dist; + else if(val == "min_source_hop_dist" || val_id 
== static_cast<int>(PierceRating::min_source_hop_dist))
+				pierce_rating = PierceRating::min_source_hop_dist;
+			else if(val == "max_target_hop_dist" || val_id == static_cast<int>(PierceRating::max_target_hop_dist))
+				pierce_rating = PierceRating::max_target_hop_dist;
+			else if(val == "random" || val_id == static_cast<int>(PierceRating::random))
+				pierce_rating = PierceRating::random;
+			else if(val == "oldest" || val_id == static_cast<int>(PierceRating::oldest))
+				pierce_rating = PierceRating::oldest;
+			else throw std::runtime_error("Unknown config value "+val+" for variable PierceRating; valid are max_target_minus_source_hop_dist, min_source_hop_dist, max_target_hop_dist, random, oldest");
+		}else if(var == "cutter_count"){
+			int x = std::stoi(val);
+			if(!(x>0))
+				throw std::runtime_error("Value for \"cutter_count\" must fulfill \"x>0\"");
+			cutter_count = x;
+		}else if(var == "random_seed"){
+			random_seed = std::stoi(val);
+		}else if(var == "max_cut_size"){
+			int x = std::stoi(val);
+			if(!(x>=1))
+				throw std::runtime_error("Value for \"max_cut_size\" must fulfill \"x>=1\"");
+			max_cut_size = x;
+		}else if(var == "min_small_side_size"){
+			float x = std::stof(val);
+			if(!(0.5>=x&&x>=0.0))
+				throw std::runtime_error("Value for \"min_small_side_size\" must fulfill \"0.5>=x&&x>=0.0\"");
+			min_small_side_size = x;
+		}else throw std::runtime_error("Unknown config variable "+var+"; valid are SkipNonMaximumSides, SeparatorSelection, GraphSearchAlgorithm, AvoidAugmentingPath, PierceRating, cutter_count, random_seed, max_cut_size, min_small_side_size");
+	}
+	std::string get(const std::string&var)const{
+		if(var == "SkipNonMaximumSides" || var == "skip_non_maximum_sides"){
+			if(skip_non_maximum_sides == SkipNonMaximumSides::skip) return "skip";
+			else if(skip_non_maximum_sides == SkipNonMaximumSides::no_skip) return "no_skip";
+			else {assert(false); return "";}
+		}else if(var == "SeparatorSelection" || var == "separator_selection"){
+			if(separator_selection ==
SeparatorSelection::node_min_expansion) return "node_min_expansion"; + else if(separator_selection == SeparatorSelection::edge_min_expansion) return "edge_min_expansion"; + else if(separator_selection == SeparatorSelection::node_first) return "node_first"; + else if(separator_selection == SeparatorSelection::edge_first) return "edge_first"; + else {assert(false); return "";} + }else if(var == "GraphSearchAlgorithm" || var == "graph_search_algorithm"){ + if(graph_search_algorithm == GraphSearchAlgorithm::pseudo_depth_first_search) return "pseudo_depth_first_search"; + else if(graph_search_algorithm == GraphSearchAlgorithm::breadth_first_search) return "breadth_first_search"; + else if(graph_search_algorithm == GraphSearchAlgorithm::depth_first_search) return "depth_first_search"; + else {assert(false); return "";} + }else if(var == "AvoidAugmentingPath" || var == "avoid_augmenting_path"){ + if(avoid_augmenting_path == AvoidAugmentingPath::avoid_and_pick_best) return "avoid_and_pick_best"; + else if(avoid_augmenting_path == AvoidAugmentingPath::do_not_avoid) return "do_not_avoid"; + else if(avoid_augmenting_path == AvoidAugmentingPath::avoid_and_pick_oldest) return "avoid_and_pick_oldest"; + else if(avoid_augmenting_path == AvoidAugmentingPath::avoid_and_pick_random) return "avoid_and_pick_random"; + else {assert(false); return "";} + }else if(var == "PierceRating" || var == "pierce_rating"){ + if(pierce_rating == PierceRating::max_target_minus_source_hop_dist) return "max_target_minus_source_hop_dist"; + else if(pierce_rating == PierceRating::min_source_hop_dist) return "min_source_hop_dist"; + else if(pierce_rating == PierceRating::max_target_hop_dist) return "max_target_hop_dist"; + else if(pierce_rating == PierceRating::random) return "random"; + else if(pierce_rating == PierceRating::oldest) return "oldest"; + else {assert(false); return "";} + }else if(var == "cutter_count"){ + return std::to_string(cutter_count); + }else if(var == "random_seed"){ + return 
std::to_string(random_seed); + }else if(var == "max_cut_size"){ + return std::to_string(max_cut_size); + }else if(var == "min_small_side_size"){ + return std::to_string(min_small_side_size); + }else throw std::runtime_error("Unknown config variable "+var+"; valid are SkipNonMaximumSides,SeparatorSelection,GraphSearchAlgorithm,AvoidAugmentingPath,PierceRating, cutter_count, random_seed, max_cut_size, min_small_side_size"); + } + std::string get_config()const{ + std::ostringstream out; + out + << std::setw(30) << "SkipNonMaximumSides" << " : " << get("SkipNonMaximumSides") << '\n' + << std::setw(30) << "SeparatorSelection" << " : " << get("SeparatorSelection") << '\n' + << std::setw(30) << "GraphSearchAlgorithm" << " : " << get("GraphSearchAlgorithm") << '\n' + << std::setw(30) << "AvoidAugmentingPath" << " : " << get("AvoidAugmentingPath") << '\n' + << std::setw(30) << "PierceRating" << " : " << get("PierceRating") << '\n' + << std::setw(30) << "cutter_count" << " : " << get("cutter_count") << '\n' + << std::setw(30) << "random_seed" << " : " << get("random_seed") << '\n' + << std::setw(30) << "max_cut_size" << " : " << get("max_cut_size") << '\n' + << std::setw(30) << "min_small_side_size" << " : " << get("min_small_side_size") << '\n'; + return out.str(); + } + + }; +} +#endif diff --git a/solvers/flow-cutter-pace17/src/greedy_order.cpp b/solvers/flow-cutter-pace17/src/greedy_order.cpp new file mode 100644 index 0000000..87aeadb --- /dev/null +++ b/solvers/flow-cutter-pace17/src/greedy_order.cpp @@ -0,0 +1,184 @@ +#include "greedy_order.h" +#include "array_id_func.h" +#include "tiny_id_func.h" +#include "permutation.h" +#include "heap.h" +#include + +ArrayIDFunc> build_dyn_array(const ArrayIDIDFunc&tail, const ArrayIDIDFunc&head){ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + ArrayIDFunc> neighbors(node_count); + + for(int i=0; i +struct NullAssign{ +public: + NullAssign&operator=(const T&){ + return *this; + } +}; 
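The counting output iterator defined next (`NullAssign`/`CountOutputIterator`) lets `std::set_difference` report how many elements it *would* emit without allocating a result container. The same idea can be sketched stand-alone; the names `counting_iterator` and `count_set_difference` below are illustrative and not part of this codebase:

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Output iterator that discards every value written through it and only
// counts the writes. Dereferencing yields the iterator itself, so
// "*it = value" hits the counting operator= below; advancing is a no-op.
struct counting_iterator {
    using iterator_category = std::output_iterator_tag;
    using value_type = void;
    using difference_type = std::ptrdiff_t;
    using pointer = void;
    using reference = void;

    int* n;
    counting_iterator& operator*() { return *this; }
    counting_iterator& operator++() { return *this; }
    counting_iterator& operator++(int) { return *this; }
    template<class T>
    counting_iterator& operator=(const T&) { ++*n; return *this; }
};

// Number of elements of sorted range a missing from sorted range b,
// computed without materializing the difference.
int count_set_difference(const std::vector<int>& a, const std::vector<int>& b) {
    int n = 0;
    std::set_difference(a.begin(), a.end(), b.begin(), b.end(), counting_iterator{&n});
    return n;
}
```

This is the trick `compute_number_of_shortcuts_added_if_contracted` uses to rate a node's contraction cost in the greedy min-shortcut order without building any temporary neighbor lists.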
+
+template<class T>
+struct CountOutputIterator{
+	typedef T value_type;
+	typedef int difference_type;
+	typedef T*pointer;
+	typedef T&reference;
+	typedef std::output_iterator_tag iterator_category;
+
+	NullAssign<T> operator*()const{
+		return {};
+	}
+
+	CountOutputIterator(int&n):
+		n(&n){}
+
+	CountOutputIterator&operator++(){
+		++*n;
+		return *this;
+	}
+
+	CountOutputIterator&operator++(int){
+		++*n;
+		return *this;
+	}
+
+	int*n;
+};
+
+
+template<class Iter1, class Iter2, class Iter3, class T>
+Iter3 set_union_and_remove_element(
+	Iter1 a, Iter1 a_end,
+	Iter2 b, Iter2 b_end,
+	Iter3 out,
+	const T&remove_element1, const T&remove_element2
+){
+	while(a != a_end && b != b_end){
+		if(*a < *b){
+			if(*a != remove_element1 && *a != remove_element2)
+				*out++ = *a;
+			++a;
+		}else if(*a > *b){
+			if(*b != remove_element1 && *b != remove_element2)
+				*out++ = *b;
+			++b;
+		}else if(*a == *b){
+			if(*a != remove_element1 && *a != remove_element2)
+				*out++ = *a;
+			++b;
+			++a;
+		}
+	}
+
+	while(a != a_end){
+		if(*a != remove_element1 && *a != remove_element2)
+			*out++ = *a;
+		++a;
+	}
+
+	while(b != b_end){
+		if(*b != remove_element1 && *b != remove_element2)
+			*out++ = *b;
+		++b;
+	}
+
+	return out;
+}
+
+std::vector<int> contract_node(ArrayIDFunc<std::vector<int>>&graph, int node){
+	std::vector<int>tmp;
+	for(int x:graph(node)){
+		tmp.clear();
+		set_union_and_remove_element(
+			graph(node).begin(), graph(node).end(),
+			graph(x).begin(), graph(x).end(),
+			std::back_inserter(tmp),
+			node,
+			x
+		);
+		graph[x].swap(tmp);
+	}
+
+	return std::move(graph[node]);
+}
+
+int compute_number_of_shortcuts_added_if_contracted(const ArrayIDFunc<std::vector<int>>&graph, int node){
+	int added = 0;
+	for(int x:graph(node)){
+		std::set_difference(
+			graph(node).begin(), graph(node).end(),
+			graph(x).begin(), graph(x).end(),
+			CountOutputIterator<int>(added)
+		);
+		--added;
+	}
+
+	added /= 2;
+
+	return added;
+}
+
+
+ArrayIDIDFunc compute_greedy_min_degree_order(const ArrayIDIDFunc&tail, const ArrayIDIDFunc&head){
+	const int node_count = tail.image_count();
+
+	auto g =
build_dyn_array(tail, head); + + min_id_heap q(node_count); + + for(int x=0; x q(node_count); + + for(int x=0; x +#include +#include +#include +#include + +//#define INTERNAL_HEAP_CHECKS + +template> +class kway_min_id_heap{ +public: + typedef keyT key_type; + typedef key_orderT key_order_type; + + explicit kway_min_id_heap(int id_count, key_order_type order = key_order_type()): + heap_end(0), heap(id_count), id_pos(id_count, -1), order(std::move(order)), contained_flags(id_count, false){ + check_id_invariants(); + check_order_invariants(); + } + + explicit kway_min_id_heap(key_order_type order = key_order_type()): + heap_end(0), order(std::move(order)){ + check_id_invariants(); + check_order_invariants(); + } + + //! Takes a range of std::pair elements (or any other struct with first and second) + //! and builds a heap containing these elements. + template + void fill(const Range&range){ + clear(); + + for(auto r:range){ + assert(0 <= r.first && r.first < (int)id_pos.size() && "range must contain valid id"); + + heap[heap_end].id = r.first; + heap[heap_end].key = r.second; + + id_pos[r.first] = heap_end; + contained_flags[r.first] = true; + ++heap_end; + } + + rebuild_heap(); + + check_id_invariants(); + check_order_invariants(); + } + + + void clear(){ + //heap_end = 0; + //std::fill(contained_flags.begin(), contained_flags.end(), false); + + for(int i=0; i=0; --i) + move_down(i); + } + + void move_up(int pos){ + if(pos != 0){ + key_type key = std::move(heap[pos].key); + int id = heap[pos].id; + + int parent_pos = parent(pos); + while(order(key, heap[parent_pos].key)){ + heap[pos].id = heap[parent_pos].id; + heap[pos].key = std::move(heap[parent_pos].key); + id_pos[heap[parent_pos].id] = pos; + + pos = parent_pos; + if(pos == 0) + break; + parent_pos = parent(pos); + } + + heap[pos].id = id; + heap[pos].key = std::move(key); + id_pos[id] = pos; + } + } + + void move_down(int pos){ + key_type key = std::move(heap[pos].key); + int id = heap[pos].id; + + for(;;){ + 
int begin = std::min(heap_end, children_begin(pos)); + int end = std::min(heap_end, children_end(pos)); + + if(begin == end) + break; + + int min_child_pos = begin; + for(int i=begin+1; iheap; + std::vectorid_pos; + key_order_type order; + std::vector contained_flags; + + void check_id_invariants()const{ + #ifdef INTERNAL_HEAP_CHECKS + for(int i=0; i> +class kway_max_id_heap{ +public: + typedef keyT key_type; + typedef key_orderT key_order_type; + + explicit kway_max_id_heap(int id_count, key_order_type order = key_order_type()) + :heap(id_count, inverted_order(std::move(order))){} + + explicit kway_max_id_heap(key_order_type order = key_order_type()) + :heap(inverted_order(std::move(order))){} + + //! Removes all elements from the heap. + void clear(){ + heap.clear(); + } + + //! Initializes the heap with a range over pair elements. + //! This function has linear running time, whereas repeated push operations have a total running time in O(n log n) + template + void fill(const Range&range){ + heap.fill(range); + } + + //! Rebuilds the queue with a different order + void reorder(key_order_type new_order){ + heap.reorder(inverted_order(std::move(new_order))); + } + + void reset(int new_id_count = 0, key_order_type new_order = key_order_type()){ + heap.reset(new_id_count, inverted_order(std::move(new_order))); + } + + void reset(key_order_type new_order){ + heap.reset(std::move(new_order)); + } + + bool empty()const{ + return heap.empty(); + } + + int size()const{ + return heap.size(); + } + + bool contains(int id)const{ + return heap.contains(id); + } + + const key_type&get_key(int id)const{ + return heap.get_key(id); + } + + void push(int id, key_type key){ + heap.push(id, std::move(key)); + } + + //! Returns true if the element moved its position within the heap. + bool push_or_decrease_key(int id, key_type key){ + return heap.push_or_increase_key(id, std::move(key)); + } + + //! Returns true if the element moved its position within the heap. 
+ bool push_or_increase_key(int id, key_type key){ + return heap.push_or_decrease_key(id, std::move(key)); + } + + //! Returns true if the element moved its position within the heap. + bool push_or_set_key(int id, key_type key){ + return heap.push_or_set_key(id, std::move(key)); + } + + key_type peek_max_key()const{ + return heap.peek_min_key(); + } + + int peek_max_id()const{ + return heap.peek_min_id(); + } + + int pop(){ + return heap.pop(); + } + +private: + struct inverted_order{ + inverted_order(){} + inverted_order(key_order_type order):order(std::move(order)){} + bool operator()(const key_type&l, const key_type&r){ + return order(r, l); + } + key_order_type order; + }; + kway_min_id_heapheap; +}; + +const int standard_heap_arity = 4; + +template> +class min_id_heap : public kway_min_id_heap{ +private: + typedef kway_min_id_heap super_type; +public: + typedef typename super_type::key_order_type key_order_type; + + explicit min_id_heap(int id_count, key_order_type order = key_order_type()) + :super_type(id_count, std::move(order)){} + + explicit min_id_heap(key_order_type order = key_order_type()) + :super_type(std::move(order)){} +}; + +template> +class max_id_heap : public kway_max_id_heap{ +private: + typedef kway_max_id_heap super_type; +public: + typedef typename super_type::key_order_type key_order_type; + + explicit max_id_heap(int id_count, key_order_type order = key_order_type()) + :super_type(id_count, std::move(order)){} + + explicit max_id_heap(key_order_type order = key_order_type()) + :super_type(std::move(order)){} +}; + +#endif + diff --git a/solvers/flow-cutter-pace17/src/histogram.h b/solvers/flow-cutter-pace17/src/histogram.h new file mode 100644 index 0000000..eadb33f --- /dev/null +++ b/solvers/flow-cutter-pace17/src/histogram.h @@ -0,0 +1,66 @@ +#ifndef HISTOGRAM_H +#define HISTOGRAM_H + +#include "id_func_traits.h" +#include "array_id_func.h" +#include + +template +typename std::enable_if< + is_id_id_func::value, + ArrayIDFunc +>::type 
compute_histogram(const IDIDFunc&f){ + ArrayIDFunch(f.image_count()); + h.fill(0); + for(int i=0; i +typename std::enable_if< + is_id_func::value, + int +>::type max_histogram_id(const IDFunc&h){ + assert(h.preimage_count() != 0); + + typename id_func_image_type::type max_element = h(0); + int max_id = 0; + for(int i=1; i +typename std::enable_if< + is_id_func::value, + int +>::type min_histogram_id(const IDFunc&h){ + assert(h.preimage_count() != 0); + + typename id_func_image_type::type min_element = h(0); + int min_id = 0; + for(int i=1; i element){ + min_element = element; + min_id = i; + } + } + #ifndef NDEBUG + for(int i=0; i= h(min_id)); + #endif + return min_id; +} + +#endif diff --git a/solvers/flow-cutter-pace17/src/id_func.h b/solvers/flow-cutter-pace17/src/id_func.h new file mode 100644 index 0000000..d1a4229 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/id_func.h @@ -0,0 +1,119 @@ +#ifndef ID_FUNC_H +#define ID_FUNC_H + +#include +#include +#include "id_func_traits.h" +#include + +template +struct LambdaIDFunc{ + int preimage_count()const{return preimage_count_;} + + typename id_func_image_type::type operator()(int id)const{ + assert(0 <= id && id <= preimage_count_ && "id out of bounds"); + return func_(id); + } + + int preimage_count_; + Func func_; +}; + +template +typename std::enable_if< + has_int_call_operator::value, + LambdaIDFunc +>::type id_func(int preimage_count, Func func){ + return {preimage_count, std::move(func)}; +} + +template +struct LambdaIDIDFunc{ + static_assert(std::is_same::type>::value, "IDIDFunc must return int"); + + int preimage_count()const{return id_func_.preimage_count();} + int image_count()const{return image_count_;} + + int operator()(int preimage)const{ + assert(0 <= preimage && preimage <= preimage_count() && "preimage out of bounds"); + int image = id_func_(preimage); + assert(0 <= image && image <= image_count() && "image out of bounds"); + return image; + } + + int image_count_; + IDFunc id_func_; + +}; + 
+template +typename std::enable_if< + is_id_func::value, + LambdaIDIDFunc +>::type id_id_func(int image_count, IDFunc func){ + return {image_count, std::move(func)}; +} + +template +typename std::enable_if< + has_int_call_operator::value, + LambdaIDIDFunc> +>::type id_id_func(int preimage_count, int image_count, Func func){ + return {image_count, id_func(preimage_count, std::move(func))}; +} + +template +struct ConstIntIDFunc{ + explicit ConstIntIDFunc(int preimage_count):preimage_count_(preimage_count){} + + int preimage_count()const{ + return preimage_count_; + } + + int operator()(int)const{ + return value; + } + + int preimage_count_; +}; + +template +class ConstRefIDIDFunc{ +public: + ConstRefIDIDFunc():ptr(0){} + ConstRefIDIDFunc(const IDIDFunc&x):ptr(&x){} + + int preimage_count()const{ return ptr->preimage_count(); } + int image_count()const{return ptr->image_count(); } + int operator()(int x)const{return (*ptr)(x);} + +private: + const IDIDFunc*ptr; +}; + +template +ConstRefIDIDFuncmake_const_ref_id_id_func(const IDIDFunc&f){ + return {f}; +} + +template +class ConstRefIDFunc{ +public: + ConstRefIDFunc():ptr(0){} + ConstRefIDFunc(const IDFunc&x):ptr(&x){} + + int preimage_count()const{ return ptr->preimage_count(); } + decltype(std::declval()(0)) operator()(int x)const{return (*ptr)(x);} + +private: + const IDFunc*ptr; +}; + +template +ConstRefIDFuncmake_const_ref_id_func(const IDFunc&f){ + return {f}; +} + + +#endif + diff --git a/solvers/flow-cutter-pace17/src/id_func_traits.h b/solvers/flow-cutter-pace17/src/id_func_traits.h new file mode 100644 index 0000000..96f008e --- /dev/null +++ b/solvers/flow-cutter-pace17/src/id_func_traits.h @@ -0,0 +1,76 @@ +#ifndef ID_FUNC_TRAITS_H +#define ID_FUNC_TRAITS_H + +#include + +template +struct id_func_image_type{ + typedef typename std::decay()(0))>::type type; +}; + +#define MAKE_TYPED_HAS_TRAIT(HAS, TYPE, EXPR)\ + templatestd::false_type HAS##_impl2(...);\ + templatestd::true_type HAS##_impl2(typename 
std::decay::type*);\ + templateauto HAS##_impl1()->decltype(HAS##_impl2(static_cast(nullptr)));\ + templatestruct HAS : std::enable_if())>::type{}; + +#define MAKE_UNTYPED_HAS_TRAIT(HAS, EXPR)\ + templatestd::false_type HAS##_impl2(...);\ + templatestd::true_type HAS##_impl2(typename std::decay::type*);\ + templateauto HAS##_impl1()->decltype(HAS##_impl2(nullptr));\ + templatestruct HAS : std::enable_if())>::type{}; + +#define F std::declval() +#define IT typename id_func_image_type::type +#define I std::declval() + +// F is an instance of the function, FT is the function type, I is an instance of the function's image type and IT is the image type + +MAKE_TYPED_HAS_TRAIT (has_preimage_count, int, F.preimage_count() ) +MAKE_TYPED_HAS_TRAIT (has_image_count, int, F.image_count() ) +MAKE_UNTYPED_HAS_TRAIT(has_int_call_operator, F(0) ) +MAKE_TYPED_HAS_TRAIT (has_int_int_call_operator, int, F(0) ) +MAKE_TYPED_HAS_TRAIT (has_set, void, F.set(0, I) ) +MAKE_TYPED_HAS_TRAIT (has_fill, void, F.fill(I) ) +MAKE_TYPED_HAS_TRAIT (has_move, IT, F.move(0) ) +MAKE_TYPED_HAS_TRAIT (has_set_image_count, void, F.set_image_count(0) ) + +#define MAKE_BOOL_TRAIT(NAME, EXPR)\ + template struct NAME : std::integral_constant{}; + +MAKE_BOOL_TRAIT(is_id_func, + has_preimage_count::value + && has_int_call_operator::value +) +MAKE_BOOL_TRAIT(is_id_id_func, + has_preimage_count::value + && has_image_count::value && + has_int_int_call_operator::value +) +MAKE_BOOL_TRAIT(is_only_id_func, + !is_id_id_func::value + && is_id_func::value +) +MAKE_BOOL_TRAIT(is_mutable_id_func, + is_id_func::value + && has_set::value + && has_fill::value + && has_move::value +) +MAKE_BOOL_TRAIT(is_mutable_id_id_func, + is_id_id_func::value + && has_set::value + && has_fill::value + && has_move::value + && has_set_image_count::value +) + +#undef F +#undef IT +#undef I +#undef MAKE_TYPED_HAS_TRAIT +#undef MAKE_UNTYPED_HAS_TRAIT +#undef MAKE_BOOL_TRAIT + + +#endif diff --git 
a/solvers/flow-cutter-pace17/src/id_multi_func.h b/solvers/flow-cutter-pace17/src/id_multi_func.h new file mode 100644 index 0000000..fb949f3 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/id_multi_func.h @@ -0,0 +1,146 @@ +#ifndef ID_MULTI_FUNC_H +#define ID_MULTI_FUNC_H + +#include "array_id_func.h" +#include "count_range.h" +#include "range.h" +#include "chain.h" +#include + +struct RangeIDIDMultiFunc{ + int preimage_count()const{ return range_begin.preimage_count()-1; } + int image_count()const{ return range_begin(preimage_count()); } + + CountRange operator()(int id)const{ + assert(0 <= id && id < preimage_count() && "id out of bounds"); + return count_range(range_begin(id), range_begin(id+1)); + } + + ArrayIDFuncrange_begin; +}; + + +template +struct ArrayIDMultiFunc{ + int preimage_count()const{ return preimage_to_intermediate.preimage_count(); } + + Rangeoperator()(int id){ + assert(0 <= id && id < preimage_count() && "id out of bounds"); + return { + intermediate_to_image.begin() + *std::begin(preimage_to_intermediate(id)), + intermediate_to_image.begin() + *std::end(preimage_to_intermediate(id)) + }; + } + + Rangeoperator()(int id)const{ + assert(0 <= id && id < preimage_count() && "id out of bounds"); + return { + intermediate_to_image.begin() + *std::begin(preimage_to_intermediate(id)), + intermediate_to_image.begin() + *std::end(preimage_to_intermediate(id)) + }; + } + + RangeIDIDMultiFunc preimage_to_intermediate; + ArrayIDFuncintermediate_to_image; +}; + + +struct ArrayIDIDMultiFunc{ + int image_count()const{ return intermediate_to_image.image_count(); } + int preimage_count()const{ return preimage_to_intermediate.preimage_count(); } + + Rangeoperator()(int id){ + assert(0 <= id && id < preimage_count() && "id out of bounds"); + return { + intermediate_to_image.begin() + *std::begin(preimage_to_intermediate(id)), + intermediate_to_image.begin() + *std::end(preimage_to_intermediate(id)) + }; + } + + Rangeoperator()(int id)const{ + assert(0 <= id 
&& id < preimage_count() && "id out of bounds"); + return { + intermediate_to_image.begin() + *std::begin(preimage_to_intermediate(id)), + intermediate_to_image.begin() + *std::end(preimage_to_intermediate(id)) + }; + } + + RangeIDIDMultiFunc preimage_to_intermediate; + ArrayIDIDFunc intermediate_to_image; +}; + +//! Inverts an id-id function f. In this context we have two ID types: preimage IDs +//! and image IDs. f maps preimage IDs onto image IDs. This function computes a +//! id-id multi function g that maps image IDs onto preimage ID ranges. +//! g(x) is a range of all y such that f(y) = x ordered increasing by y. +template +ArrayIDIDMultiFunc invert_id_id_func(const IDIDFunc&f){ + ArrayIDIDMultiFunc g = { + RangeIDIDMultiFunc{ + ArrayIDFunc{f.image_count()+1} + }, + ArrayIDIDFunc{f.preimage_count(), f.preimage_count()} + }; + + auto&begin = g.preimage_to_intermediate.range_begin; + begin.fill(0); + + for(int i=0; i +RangeIDIDMultiFunc invert_sorted_id_id_func(const IDIDFunc&f){ + assert(std::is_sorted(f.begin(), f.end()) && "f is not sorted"); + + RangeIDIDMultiFunc h = {ArrayIDFunc{f.image_count()+1}}; + + auto&begin = h.range_begin; + begin.fill(0); + + for(int i=0; i +ArrayIDIDMultiFunc compute_successor_function(const Tail&tail, const Head&head){ + auto x = invert_id_id_func(tail); + x.intermediate_to_image = chain(x.intermediate_to_image, head); + return x; // NRVO +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/id_sort.h b/solvers/flow-cutter-pace17/src/id_sort.h new file mode 100644 index 0000000..4c038d8 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/id_sort.h @@ -0,0 +1,39 @@ +#ifndef ID_SORT_H +#define ID_SORT_H + +#include "array_id_func.h" +#include + +template +void stable_sort_copy_by_id( + InIter in_begin, InIter in_end, + OutIter out_iter, + int id_count, const GetID&get_id +){ + ArrayIDFuncpos(id_count); + pos.fill(0); + for(InIter i=in_begin; i!=in_end; ++i) + ++pos[get_id(*i)]; + + int sum = 0; + for(int i=0; i +void 
stable_sort_copy_by_id( + InIter in_begin, InIter in_end, + OutIter out_iter, + const GetID&get_id +){ + stable_sort_copy_by_id(in_begin, in_end, out_iter, get_id.image_count(), get_id); +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/io_helper.h b/solvers/flow-cutter-pace17/src/io_helper.h new file mode 100644 index 0000000..0d970c2 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/io_helper.h @@ -0,0 +1,36 @@ +#ifndef IO_HELPER_H +#define IO_HELPER_H + +#include +#include +#include +#include +#include + +template +void save_text_file(const std::string&file_name, const SaveFunc&save, Args&&...args){ + if(file_name == "-"){ + save(std::cout, std::forward(args)...); + std::cout << std::flush; + } else if(file_name == "-null"){ + } else { + std::ofstream out(file_name); + if(!out) + throw std::runtime_error("Could not open "+file_name+" for text writing"); + save(out, std::forward(args)...); + } +} + +template +auto load_uncached_text_file(const std::string&file_name, const LoadFunc&load)->decltype(load(std::cin)){ + if(file_name == "-"){ + return load(std::cin); + } else { + std::ifstream in(file_name); + if(!in) + throw std::runtime_error("Could not load "+file_name+" for text reading"); + return load(in); + } +} + +#endif diff --git a/solvers/flow-cutter-pace17/src/list_graph.cpp b/solvers/flow-cutter-pace17/src/list_graph.cpp new file mode 100644 index 0000000..5ab27e0 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/list_graph.cpp @@ -0,0 +1,65 @@ +#include "list_graph.h" +#include "io_helper.h" +#include "multi_arc.h" +#include "id_multi_func.h" + +#include +#include +#include +#include + +static +ListGraph load_pace_graph_impl(std::istream&in){ + ListGraph graph; + std::string line; + int line_num = 0; + int next_arc = 0; + + bool was_header_read = false; + while(std::getline(in, line)){ + ++line_num; + if(line.empty() || line[0] == 'c') + continue; + + std::istringstream lin(line); + if(!was_header_read){ + was_header_read = true; + std::string 
p, sp; + int node_count; + int arc_count; + if(!(lin >> p >> sp >> node_count >> arc_count)) + throw std::runtime_error("Cannot parse header in pace file."); + if(p != "p" || sp != "tw" || node_count < 0 || arc_count < 0) + throw std::runtime_error("Invalid header in pace file."); + graph = ListGraph(node_count, 2*arc_count); +}else{ + int h, t; + if(!(lin >> t >> h)) + throw std::runtime_error("Cannot parse line num "+std::to_string(line_num)+" \""+line+"\" in pace file."); + --h; + --t; + if(h < 0 || h >= graph.node_count() || t < 0 || t >= graph.node_count()) + throw std::runtime_error("Invalid arc in line num "+std::to_string(line_num)+" \""+line+"\" in pace file."); + if(next_arc < graph.arc_count()){ + graph.head[next_arc] = h; + graph.tail[next_arc] = t; + } + ++next_arc; + if(next_arc < graph.arc_count()){ + graph.head[next_arc] = t; + graph.tail[next_arc] = h; + } + ++next_arc; + } + } + + if(next_arc != graph.arc_count()) + throw std::runtime_error("The arc count in the header ("+std::to_string(graph.arc_count())+") does not correspond with the actual number of arcs ("+std::to_string(next_arc)+")."); + + return graph; // NRVO +} + +ListGraph uncached_load_pace_graph(const std::string&file_name){ + return load_uncached_text_file(file_name, load_pace_graph_impl); +} + diff --git a/solvers/flow-cutter-pace17/src/list_graph.h b/solvers/flow-cutter-pace17/src/list_graph.h new file mode 100644 index 0000000..5554217 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/list_graph.h @@ -0,0 +1,22 @@ +#ifndef LIST_GRAPH_H +#define LIST_GRAPH_H + +#include "array_id_func.h" + +#include <string> + +struct ListGraph{ + ListGraph()=default; + ListGraph(int node_count, int arc_count) + :head(arc_count, node_count), tail(arc_count, node_count){} + + int node_count()const{ return head.image_count(); } + int arc_count()const{ return head.preimage_count(); } + + ArrayIDIDFunc head, tail; +}; + +ListGraph uncached_load_pace_graph(const std::string&file_name); + + +#endif diff --git 
a/solvers/flow-cutter-pace17/src/min_max.h b/solvers/flow-cutter-pace17/src/min_max.h new file mode 100644 index 0000000..5bac645 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/min_max.h @@ -0,0 +1,75 @@ +#ifndef MIN_MAX_H +#define MIN_MAX_H + +#include +#include +#include "id_func_traits.h" + +template +void min_to(T&x, T y){ + if(y < x) + x = std::move(y); +} + +template +void max_to(T&x, T y){ + if(y > x) + x = std::move(y); +} + +template +void sort_ref_args(T&x, T&y){ + if(y < x) + std::swap(x,y); +} + +template +typename id_func_image_type::type min_over_id_func(const F&f){ + assert(f.preimage_count() != 0); + typename id_func_image_type::type result = f(0); + for(int i=1; i +typename id_func_image_type::type max_over_id_func(const F&f){ + assert(f.preimage_count() != 0); + typename id_func_image_type::type result = f(0); + for(int i=1; i +int min_preimage_over_id_func(const F&f){ + assert(f.preimage_count() != 0); + int preimage = 0; + typename id_func_image_type::type m = f(0); + for(int i=1; i +int max_preimage_over_id_func(const F&f){ + assert(f.preimage_count() != 0); + int preimage = 0; + typename id_func_image_type::type m = f(0); + for(int i=1; i +BitIDFunc identify_non_multi_arcs(const Tail&tail, const Head&head){ + const int arc_count = tail.preimage_count(); + auto arc_list = sort_arcs_first_by_tail_second_by_head(tail, head); + BitIDFunc is_non_multi_arc(arc_count); + if(arc_count > 0){ + is_non_multi_arc.set(arc_list[0], true); + for(int i=1; i +bool is_symmetric(const Tail&tail, const Head&head){ + const int arc_count = tail.preimage_count(); + auto forward_arc_list = sort_arcs_first_by_tail_second_by_head(tail, head); + auto backward_arc_list = sort_arcs_first_by_tail_second_by_head(head, tail); + + for(int i=0; i +class SymmetricHead{ +public: + SymmetricHead(Tail tail, Head head): + tail(std::move(tail)), head(std::move(head)){} + + int preimage_count()const{ return 2*tail.preimage_count(); } + int image_count()const {return 
tail.image_count(); } + + int operator()(int x)const{ + if(x < tail.preimage_count()) + return head(x); + else + return tail(x - tail.preimage_count()); + } +private: + Tail tail; + Head head; +}; + +template +SymmetricHeadmake_symmetric_head(Tail tail, Head head){ + return {std::move(tail), std::move(head)}; +} + +template +class SymmetricTail{ +public: + SymmetricTail(Tail tail, Head head): + tail(std::move(tail)), head(std::move(head)){} + + int preimage_count()const{ return 2*tail.preimage_count(); } + int image_count()const {return tail.image_count(); } + + int operator()(int x)const{ + if(x < tail.preimage_count()) + return tail(x); + else + return head(x - tail.preimage_count()); + } +private: + Tail tail; + Head head; +}; + +template +SymmetricTailmake_symmetric_tail(Tail tail, Head head){ + return {std::move(tail), std::move(head)}; +} + +template +bool has_multi_arcs(const Tail&tail, const Head&head){ + return count_true(identify_non_multi_arcs(tail, head)) != tail.preimage_count(); +} + +template +bool is_loop_free(const Tail&tail, const Head&head){ + for(int i=0; iy two new inter arcs x_out -> y_in and y_in -> x_out are created + //! For each node x two new intra arcs x_out -> x_in and x_in -> x_out are created + //! Every in to out arc has capacity 1 and every out to in arc has capacity 0. + //! 
+ + namespace expanded_graph{ + inline int expanded_node_count(int original_node_count){ return 2*original_node_count; } + inline int expanded_arc_count(int original_node_count, int original_arc_count){ return 2*(original_node_count+original_arc_count); } + inline int expanded_node_to_original_node(int x){ return x/2; } + inline int original_node_to_expanded_node(int x, bool is_out){ return 2*x+is_out; } + inline bool get_expanded_node_out_flag(int x){ return x&1; } + inline bool is_expanded_intra_arc(int x, int original_arc_count){ return x >= 2*original_arc_count; } + inline bool is_expanded_inter_arc(int x, int original_arc_count){ return x < 2*original_arc_count; } + inline bool get_expanded_arc_tail_out_flag(int x){return x&1;} + inline int expanded_inter_arc_to_original_arc(int x, int original_arc_count){ (void)original_arc_count; return x/2; } + inline int original_arc_to_expanded_inter_arc(int x, bool tail_out_flag, int original_arc_count){ return 2*x+tail_out_flag; } + inline int expanded_intra_arc_to_original_node(int x, int original_arc_count){ (void)original_arc_count; return x/2-original_arc_count; } + inline int original_node_to_expanded_intra_arc(int x, bool tail_out_flag, int original_arc_count){ return 2*(original_arc_count+x)+tail_out_flag; } + + template + struct Tail{ + int original_node_count, original_arc_count; + OriginalTail original_tail; + + int preimage_count()const{return expanded_arc_count(original_node_count, original_arc_count);} + int image_count()const{return expanded_node_count(original_node_count);} + + int operator()(int a)const{ + if(is_expanded_intra_arc(a, original_arc_count)){ + return original_node_to_expanded_node(expanded_intra_arc_to_original_node(a, original_arc_count), get_expanded_arc_tail_out_flag(a)); + }else{ + return original_node_to_expanded_node(original_tail(expanded_inter_arc_to_original_arc(a, original_arc_count)), get_expanded_arc_tail_out_flag(a)); + } + } + }; + + template + Tailtail(int 
original_node_count, int original_arc_count, OriginalTail original_tail){ + return {original_node_count, original_arc_count, std::move(original_tail)}; + } + + template + struct Head{ + int original_node_count, original_arc_count; + OriginalHead original_head; + + int preimage_count()const{return expanded_arc_count(original_node_count, original_arc_count);} + int image_count()const{return expanded_node_count(original_node_count);} + + int operator()(int a)const{ + if(is_expanded_intra_arc(a, original_arc_count)){ + return original_node_to_expanded_node(expanded_intra_arc_to_original_node(a, original_arc_count), !get_expanded_arc_tail_out_flag(a)); + }else{ + return original_node_to_expanded_node(original_head(expanded_inter_arc_to_original_arc(a, original_arc_count)), !get_expanded_arc_tail_out_flag(a)); + } + } + }; + + template + Headhead(int original_node_count, int original_arc_count, OriginalHead original_head){ + return {original_node_count, original_arc_count, std::move(original_head)}; + } + + template + struct BackArc{ + int original_node_count, original_arc_count; + OriginalBackArc original_back_arc; + + int preimage_count()const{return expanded_arc_count(original_node_count, original_arc_count);} + int image_count()const{return expanded_arc_count(original_node_count, original_arc_count);} + + int operator()(int a)const{ + if(is_expanded_intra_arc(a, original_arc_count)){ + return original_node_to_expanded_intra_arc(expanded_intra_arc_to_original_node(a, original_arc_count), !get_expanded_arc_tail_out_flag(a), original_arc_count); + }else{ + return original_arc_to_expanded_inter_arc(original_back_arc(expanded_inter_arc_to_original_arc(a, original_arc_count)), !get_expanded_arc_tail_out_flag(a), original_arc_count); + } + } + }; + + template + BackArcback_arc(int original_node_count, int original_arc_count, OriginalBackArc original_back_arc){ + return {original_node_count, original_arc_count, std::move(original_back_arc)}; + } + + struct Capacity{ + int 
original_node_count, original_arc_count; + + int preimage_count()const{return expanded_arc_count(original_node_count, original_arc_count);} + + int operator()(int a)const{ + if(is_expanded_intra_arc(a, original_arc_count)){ + return !get_expanded_arc_tail_out_flag(a); + }else{ + return get_expanded_arc_tail_out_flag(a); + } + } + }; + + Capacity capacity(int original_node_count, int original_arc_count){ + return {original_node_count, original_arc_count}; + } + + template<class OriginalOutArc> + struct OutArcIter{ + + typedef typename std::decay<decltype(std::begin(std::declval<OriginalOutArc>()(0)))>::type OriginalOutArcIter; + + typedef int value_type; + typedef int difference_type; + typedef const int* pointer; + typedef const int& reference; + typedef std::forward_iterator_tag iterator_category; + + OutArcIter(){} + + OutArcIter(int intra_arc, bool node_out_flag, OriginalOutArcIter base_iter): + intra_arc(intra_arc), node_out_flag(node_out_flag), base_iter(base_iter){} + + OutArcIter&operator++(){ + if(intra_arc != -1) + intra_arc = -1; + else + ++base_iter; + return *this; + } + + OutArcIter operator++(int) { + OutArcIter tmp(*this); + operator++(); + return tmp; + } + + int operator*()const{ + if(intra_arc != -1) + return intra_arc; + else + return 2*(*base_iter) + node_out_flag; + } + + mutable int dummy; // To whoever defined the awkward operator-> semantics: Screw you! 
+ const int*operator->() const { + dummy = **this; + return &dummy; + } + + int intra_arc; + bool node_out_flag; + OriginalOutArcIter base_iter; + + friend bool operator==(OutArcIter l, OutArcIter r){ + return l.base_iter == r.base_iter && l.intra_arc == r.intra_arc && l.node_out_flag == r.node_out_flag; + } + + friend bool operator!=(OutArcIter l, OutArcIter r){ + return !(l == r); + } + }; + + template<class OriginalOutArc> + struct OutArc{ + int original_node_count, original_arc_count; + OriginalOutArc original_out_arc; + + int preimage_count()const{return expanded_node_count(original_node_count);} + + Range<OutArcIter<OriginalOutArc>> operator()(int x)const{ + int original_x = expanded_node_to_original_node(x); + bool out_flag = get_expanded_node_out_flag(x); + auto r = original_out_arc(original_x); + return Range<OutArcIter<OriginalOutArc>>{ + OutArcIter<OriginalOutArc>{original_node_to_expanded_intra_arc(original_x, out_flag, original_arc_count), out_flag, std::begin(r)}, + OutArcIter<OriginalOutArc>{-1, out_flag, std::end(r)} + }; + } + }; + + template<class OriginalOutArc> + OutArc<OriginalOutArc>out_arc(int original_node_count, int original_arc_count, OriginalOutArc original_out_arc){ + return {original_node_count, original_arc_count, std::move(original_out_arc)}; + } + + + template<class Tail, class Head, class BackArc, class OutArc> + Graph< + expanded_graph::Tail<Tail>, + expanded_graph::Head<Head>, + expanded_graph::BackArc<BackArc>, + expanded_graph::Capacity, + expanded_graph::OutArc<OutArc> + > + make_graph(Tail tail, Head head, BackArc back_arc, OutArc out_arc){ + int node_count = tail.image_count(), arc_count = tail.preimage_count(); + return{ + expanded_graph::tail(node_count, arc_count, std::move(tail)), + expanded_graph::head(node_count, arc_count, std::move(head)), + expanded_graph::back_arc(node_count, arc_count, std::move(back_arc)), + expanded_graph::capacity(node_count, arc_count), + expanded_graph::out_arc(node_count, arc_count, std::move(out_arc)) + }; + } + + struct MixedCut{ + std::vector<int>arcs, nodes; + }; + + inline + MixedCut expanded_cut_to_original_mixed_cut(const std::vector<int>&expanded_cut, int original_arc_count){ + MixedCut original_cut; + + for(auto 
x:expanded_cut){ + if(is_expanded_inter_arc(x, original_arc_count)){ + original_cut.arcs.push_back(expanded_inter_arc_to_original_arc(x, original_arc_count)); + }else{ + original_cut.nodes.push_back(expanded_intra_arc_to_original_node(x, original_arc_count)); + } + } + + return original_cut; // NRVO + } + + struct Separator{ + std::vectorsep; + int small_side_size; + }; + + template + Separator extract_original_separator(const Tail&tail, const Head&head, const FlowCutter&cutter){ + int original_node_count = tail.image_count(); + int original_arc_count = tail.preimage_count(); + + Separator sep; + + for(auto x:cutter.get_current_cut()){ + if(is_expanded_intra_arc(x, original_arc_count)){ + sep.sep.push_back(expanded_intra_arc_to_original_node(x, original_arc_count)); + } + } + + int left_side_size = (cutter.get_current_smaller_cut_side_size()-sep.sep.size())/2; + int right_side_size = original_node_count - sep.sep.size() - left_side_size; + + auto is_original_node_left = [&](int x){ + return cutter.is_on_smaller_side(original_node_to_expanded_node(x, true)); + }; + + for(auto x:cutter.get_current_cut()){ + if(is_expanded_inter_arc(x, original_arc_count)){ + auto lr = expanded_inter_arc_to_original_arc(x, original_arc_count); + auto l = tail(lr), r = head(lr); + if(is_original_node_left(r)) + std::swap(l, r); + + if(left_side_size > right_side_size){ + sep.sep.push_back(l); + --left_side_size; + }else{ + sep.sep.push_back(r); + --right_side_size; + } + } + } + + sep.small_side_size = std::min(left_side_size, right_side_size); + + std::sort(sep.sep.begin(), sep.sep.end()); + sep.sep.erase(std::unique(sep.sep.begin(), sep.sep.end()), sep.sep.end()); + + return sep; // NVRO + } + + inline + std::vectorexpand_source_target_pair_list(std::vectorp){ + for(auto&x:p){ + x.source = original_node_to_expanded_node(x.source, false); + x.target = original_node_to_expanded_node(x.target, true); + } + return std::move(p); + } + } + + +} + +#endif diff --git 
a/solvers/flow-cutter-pace17/src/pace.cpp b/solvers/flow-cutter-pace17/src/pace.cpp new file mode 100644 index 0000000..f45fe31 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/pace.cpp @@ -0,0 +1,597 @@ + +#include "id_func.h" +#include "list_graph.h" +#include "multi_arc.h" +#include "sort_arc.h" +#include "chain.h" +#include "union_find.h" +#include "node_flow_cutter.h" +#include "separator.h" +#include "id_multi_func.h" +#include "filter.h" +#include "preorder.h" +#include "contraction_graph.h" +#include "tree_decomposition.h" +#include "greedy_order.h" +#include "min_max.h" + +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +using namespace std; + +ArrayIDIDFunc tail, head; +const char*volatile best_decomposition = 0; +int best_bag_size = numeric_limits<int>::max(); +int print_below; + +void ignore_return_value(long long){} + +unsigned long long get_milli_time(){ + struct timeval tv; + gettimeofday(&tv, NULL); + return (unsigned long long)(tv.tv_sec) * 1000 + + (unsigned long long)(tv.tv_usec) / 1000; +} + +// This hack is actually standard compliant +template<class T, class S, class C> +S& access_internal_vector(std::priority_queue<T, S, C>& q) { + struct Hacked : private priority_queue<T, S, C> { + static S& access(priority_queue<T, S, C>& q) { + return q.*&Hacked::c; + } + }; + return Hacked::access(q); +} + +void print_comment(std::string msg){ + msg = "c "+std::move(msg) + "\n"; + ignore_return_value(write(STDOUT_FILENO, msg.data(), msg.length())); +} + +template<class Tail, class Head> +void check_multilevel_partition_invariants(const Tail&tail, const Head&head, const std::vector<Cell>&multilevel_partition){ + #ifndef NDEBUG + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + auto is_child_of = [&](int c, int p){ + for(;;){ + if(c == p) + return true; + if(c == -1) + return false; + c = multilevel_partition[c].parent_cell; + } + }; + + auto are_ordered = [&](int a, int b){ + return is_child_of(a, b) || is_child_of(b, a); + }; + + ArrayIDFunc<int> 
cell_of_node(node_count); + cell_of_node.fill(-1); + + for(int i=0; i<(int)multilevel_partition.size(); ++i){ + for(auto&y:multilevel_partition[i].separator_node_list){ + assert(cell_of_node(y) == -1); + cell_of_node[y] = i; + } + } + + for(auto x:cell_of_node) + assert(x != -1); + + for(int xy = 0; xy < arc_count; ++xy){ + int x = cell_of_node(tail(xy)), y = cell_of_node(head(xy)); + assert(are_ordered(x, y)); + } + #endif +} + +template +void compute_multilevel_partition(const Tail&tail, const Head&head, const ComputeSeparator&compute_separator, int smallest_known_treewidth, const OnNewMP&on_new_multilevel_partition){ + + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + std::vectorclosed_cells; + std::priority_queueopen_cells; + + { + Cell top_level_cell; + top_level_cell.separator_node_list.resize(node_count); + for(int i=0; icells = closed_cells; + for(auto&q:access_internal_vector(open_cells)) + cells.push_back(q); + check_multilevel_partition_invariants(tail, head, cells); + on_new_multilevel_partition(cells, open_cells.empty() || max_closed_bag_size>=max_open_bag_size); + } + }; + + check_if_better(); + + + ArrayIDFuncnode_to_sub_node(node_count); + node_to_sub_node.fill(-1); + + auto inv_tail = invert_sorted_id_id_func(tail); + + BitIDFunc in_child_cell(node_count); + in_child_cell.fill(false); + + while(!open_cells.empty()){ + + #ifndef NDEBUG + + int real_max_closed_bag_size = 0; + for(auto&x:closed_cells) + max_to(real_max_closed_bag_size, x.bag_size()); + assert(max_closed_bag_size == real_max_closed_bag_size); + + int real_max_open_bag_size = 0; + for(auto&x:access_internal_vector(open_cells)) + max_to(real_max_open_bag_size, x.bag_size()); + assert(max_open_bag_size == real_max_open_bag_size); + + #endif + + auto current_cell = std::move(open_cells.top()); + open_cells.pop(); + + bool must_recompute_max_open_bag_size = (current_cell.bag_size() == max_open_bag_size); + + int closed_cell_id = 
closed_cells.size(); + + if(current_cell.bag_size() > max_closed_bag_size){ + + auto interior_node_list = std::move(current_cell.separator_node_list); + int interior_node_count = interior_node_list.size(); + + ArrayIDFuncsub_node_to_node(interior_node_count); + + int next_sub_id = 0; + for(int x:interior_node_list){ + node_to_sub_node[x] = next_sub_id; + sub_node_to_node[next_sub_id] = x; + ++next_sub_id; + } + + auto is_node_interior = id_func( + node_count, + [&](int x)->bool{ + return node_to_sub_node(x) != -1; + } + ); + + auto is_arc_interior = id_func( + arc_count, + [&](int xy)->bool{ + return is_node_interior(tail(xy)) && is_node_interior(head(xy)); + } + ); + + int interior_arc_count = count_true(is_arc_interior); + auto sub_tail = keep_if(is_arc_interior, interior_arc_count, tail); + auto sub_head = keep_if(is_arc_interior, interior_arc_count, head); + + for(auto&x:sub_tail) + x = node_to_sub_node(x); + sub_tail.set_image_count(interior_node_count); + + for(auto&x:sub_head) + x = node_to_sub_node(x); + sub_head.set_image_count(interior_node_count); + + auto sub_separator = compute_separator(sub_tail, sub_head); + + BitIDFunc is_in_sub_separator(interior_node_count); + is_in_sub_separator.fill(false); + for(auto x:sub_separator) + is_in_sub_separator.set(x, true); + + UnionFind uf(interior_node_count); + + for(int xy=0; xy>nodes_of_representative(interior_node_count); + for(int x=0; xbool{ + for(auto xy:inv_tail(x)) + if(in_child_cell(head(xy))) + return false; + return true; + } + ), + new_cell.boundary_node_list.end() + ); + for(auto x:new_cell.separator_node_list) + in_child_cell.set(x, false); + } + + new_cell.separator_node_list.shrink_to_fit(); + new_cell.boundary_node_list.shrink_to_fit(); + + if(new_cell.bag_size() > max_open_bag_size) + max_open_bag_size = new_cell.bag_size(); + + open_cells.push(std::move(new_cell)); + } + } + + current_cell.separator_node_list = std::move(separator); + current_cell.separator_node_list.shrink_to_fit(); + + 
for(int x:interior_node_list) + node_to_sub_node[x] = -1; + } + + if(current_cell.bag_size() > max_closed_bag_size) + max_closed_bag_size = current_cell.bag_size(); + + if(must_recompute_max_open_bag_size){ + max_open_bag_size = 0; + for(auto&x:access_internal_vector(open_cells)) + if(x.bag_size() > max_open_bag_size) + max_open_bag_size = x.bag_size(); + } + + closed_cells.push_back(std::move(current_cell)); + + check_if_better(); + + if(max_closed_bag_size >= smallest_known_treewidth){ + return; + } + + if(max_closed_bag_size >= max_open_bag_size){ + return; + } + } +} + +ArrayIDIDFunc preorder, inv_preorder; + +std::string format_multilevel_partition_as_tree_decomposition(const std::vector&cell_list){ + std::ostringstream out; + print_tree_decompostion_of_multilevel_partition(out, tail, head, preorder, cell_list); + return out.str(); +} + + +char no_decomposition_message[] = "c info programm was aborted before any decomposition was computed\n"; + +void signal_handler(int) +{ + const char*x = best_decomposition; + if(x != 0) + ignore_return_value(write(STDOUT_FILENO, x, strlen(x))); + else + ignore_return_value(write(STDOUT_FILENO, no_decomposition_message, sizeof(no_decomposition_message))); + + _Exit(EXIT_SUCCESS); +} + + + +int compute_max_bag_size_of_order(const ArrayIDIDFunc&order){ + auto inv_order = inverse_permutation(order); + int current_tail = -1; + int current_tail_up_deg = 0; + int max_up_deg = 0; + compute_chordal_supergraph( + chain(tail, inv_order), chain(head, inv_order), + [&](int x, int y){ + if(current_tail != x){ + current_tail = x; + max_to(max_up_deg, current_tail_up_deg); + current_tail_up_deg = 0; + } + ++current_tail_up_deg; + } + ); + return max_up_deg+1; +} + +const char*compute_decomposition_given_order(const ArrayIDIDFunc&order){ + ostringstream out; + print_tree_decompostion_of_order(out, tail, head, order); + char*buf = new char[out.str().length()+1]; + memcpy(buf, out.str().c_str(), out.str().length()+1); + return buf; +} + +void 
test_new_order(const ArrayIDIDFunc&order){ + int x = compute_max_bag_size_of_order(order); + { + if(x < best_bag_size){ + best_bag_size = x; + const char*old_decomposition = best_decomposition; + best_decomposition = compute_decomposition_given_order(order); + if(x < print_below) { + print_comment("outputing bagsize " + to_string(best_bag_size)); + ignore_return_value(write(STDOUT_FILENO, best_decomposition, strlen(best_decomposition))); + string terminator = "=\n"; + ignore_return_value(write(STDOUT_FILENO, terminator.data(), terminator.length())); + } + delete[]old_decomposition; + { + string msg = "c status "+to_string(best_bag_size)+" "+to_string(get_milli_time())+"\n"; + ignore_return_value(write(STDOUT_FILENO, msg.data(), msg.length())); + } + } + } +} + + + +int main(int argc, char*argv[]){ + signal(SIGTERM, signal_handler); + signal(SIGINT, signal_handler); + + signal(SIGSEGV, signal_handler); + try{ + { + string file_name = "-"; + if(argc == 2) + file_name = argv[1]; + auto g = uncached_load_pace_graph(file_name); + tail = std::move(g.tail); + head = std::move(g.head); + } + + int random_seed = 0; + print_below = 0; + + if(argc >= 3){ + if(string(argv[1]) == "-s"){ + random_seed = atoi(argv[2]); + } else if(string(argv[1]) == "-p"){ + print_below = atoi(argv[2]); + } + } + + if(argc >= 5){ + if(string(argv[3]) == "-s"){ + random_seed = atoi(argv[4]); + } else if(string(argv[3]) == "-p"){ + print_below = atoi(argv[4]); + } + } + + { + preorder = compute_preorder(compute_successor_function(tail, head)); + for(int i=0; i&multilevel_partition, bool must_print){ + + long long now = get_milli_time(); + + if(!must_print && now - last_print < 30000) + return; + last_print = now; + + int tw = get_treewidth_of_multilevel_partition(multilevel_partition); + { + +//cerr << "New" << endl; +//for(int i=0; i<(int)multilevel_partition.size(); ++i){ +// cerr << i << " : " << multilevel_partition[i].parent_cell << " :"; +// 
for(auto&y:multilevel_partition[i].separator_node_list) +// cerr << " " << y ; +// cerr << endl; +//} + + auto td = format_multilevel_partition_as_tree_decomposition(multilevel_partition); + + char*new_decomposition = new char[td.length()+1]; + memcpy(new_decomposition, td.c_str(), td.length()+1); + const char*old_decomposition = best_decomposition; + best_decomposition = new_decomposition; + best_bag_size = tw; + if(best_bag_size < print_below) { + print_comment("outputing bagsize " + to_string(best_bag_size)); + ignore_return_value(write(STDOUT_FILENO, best_decomposition, strlen(best_decomposition))); + string terminator = "=\n"; + ignore_return_value(write(STDOUT_FILENO, terminator.data(), terminator.length())); + } + delete[]old_decomposition; + } + print_comment("status "+to_string(best_bag_size)+" "+to_string(get_milli_time())); + }; + + + { + try{ + std::minstd_rand rand_gen; + rand_gen.seed(random_seed); + + if(node_count > 500000) + { + print_comment("start F1 with 0.1 min balance and edge_first"); + flow_cutter::Config config; + config.cutter_count = 1; + config.random_seed = rand_gen(); + config.min_small_side_size = 0.1; + config.max_cut_size = 500; + config.separator_selection = flow_cutter::Config::SeparatorSelection::edge_first; + compute_multilevel_partition(tail, head, flow_cutter::ComputeSeparator(config), best_bag_size, on_new_multilevel_partition); + } + +// { +// print_comment("start F1 with 0.1 min balance and node_first"); +// flow_cutter::Config config; +// config.cutter_count = 1; +// config.random_seed = rand_gen(); +// config.min_small_side_size = 0.1; +// config.max_cut_size = 5000; +// config.separator_selection = flow_cutter::Config::SeparatorSelection::node_first; +// compute_multilevel_partition(tail, head, flow_cutter::ComputeSeparator(config), best_bag_size, on_new_multilevel_partition); +// } + + +// { +// print_comment("start F3 with 0.05 min balance and node_first"); +// flow_cutter::Config config; +// config.cutter_count = 3; 
+// config.random_seed = rand_gen(); +// config.min_small_side_size = 0.05; +// config.max_cut_size = 5000; +// config.separator_selection = flow_cutter::Config::SeparatorSelection::node_first; +// compute_multilevel_partition(tail, head, flow_cutter::ComputeSeparator(config), best_bag_size, on_new_multilevel_partition); +// } + + +// { +// print_comment("start foo"); +// flow_cutter::Config config; +// config.cutter_count = 1; +// config.random_seed = rand_gen(); +// config.min_small_side_size = 0.1; +// config.max_cut_size = 5000; +// config.separator_selection = flow_cutter::Config::SeparatorSelection::node_first; + + +// auto top_sep_list = flow_cutter::ComputeSeparatorList(config)(tail, head); + +// top_sep_list.erase(top_sep_list.begin(), top_sep_list.end()-2); + +// for(auto&top_sep:top_sep_list){ +// print_comment("use top level sep "+std::to_string(top_sep.size())); +// auto default_compute_separator = flow_cutter::ComputeSeparator(config); +// auto compute_separator = [&](const ArrayIDIDFunc&tail, const ArrayIDIDFunc&head)->std::vector{ +// if(tail.image_count() == node_count){ +// print_comment("Used"); +// return top_sep; +// }else +// return default_compute_separator(tail, head); +// }; +// compute_multilevel_partition(tail, head, compute_separator, best_bag_size, on_new_multilevel_partition); +// } + +// } + + if(node_count < 50000){ + print_comment("min degree heuristic"); + test_new_order(chain(compute_greedy_min_degree_order(tail, head), inv_preorder)); + } + + if(node_count < 10000){ + print_comment("min shortcut heuristic"); + test_new_order(chain(compute_greedy_min_shortcut_order(tail, head), inv_preorder)); + } + + { + print_comment("run with 0.0/0.1/0.2 min balance and node_min_expansion in endless loop with varying seed"); + flow_cutter::Config config; + config.cutter_count = 1; + config.random_seed = rand_gen(); + config.max_cut_size = 10000; + config.separator_selection = flow_cutter::Config::SeparatorSelection::node_min_expansion; + + 
for(int i=2;;++i){ + config.random_seed = rand_gen(); + if(i % 16 == 0) + ++config.cutter_count; + + switch(i % 3){ + case 2: config.min_small_side_size = 0.2; break; + case 1: config.min_small_side_size = 0.1; break; + case 0: config.min_small_side_size = 0.0; break; + } + +// switch(i % 2){ +// case 1: config.separator_selection = flow_cutter::Config::SeparatorSelection::node_min_expansion; break; +// case 0: config.separator_selection = flow_cutter::Config::SeparatorSelection::node_first; break; +// } + +// print_comment("new run with F"+std::to_string(config.cutter_count)); + + compute_multilevel_partition(tail, head, flow_cutter::ComputeSeparator(config), best_bag_size, on_new_multilevel_partition); + } + } + }catch(...){ + } + + } + }catch(...){ + } + signal_handler(0); +} + diff --git a/solvers/flow-cutter-pace17/src/permutation.h b/solvers/flow-cutter-pace17/src/permutation.h new file mode 100644 index 0000000..c431684 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/permutation.h @@ -0,0 +1,48 @@ +#ifndef PERMUTATION_H +#define PERMUTATION_H + +#include "tiny_id_func.h" + +template +bool is_permutation(const IDIDFunc&f){ + if(f.preimage_count() != f.image_count()) + return false; + + int id_count = f.preimage_count(); + + BitIDFunc already_seen(id_count); + already_seen.fill(false); + for(int i=0; i= id_count) + return false; + if(already_seen(x)) + return false; + already_seen.set(x, true); + } + return true; +} + + + +template +ArrayIDIDFunc inverse_permutation(const IDIDFunc&f){ + assert(is_permutation(f)); + + int id_count = f.preimage_count(); + + ArrayIDIDFunc inv_f(id_count, id_count); + for(int i=0; i + +template +ArrayIDIDFunc compute_preorder(const Out&out){ + const int node_count = out.preimage_count(); + + ArrayIDIDFunc p(node_count, node_count); + + BitIDFunc seen(node_count); + seen.fill(false); + + typedef typename std::decay::type Iter; + ArrayIDFuncnext_out(node_count); + for(int i=0; i stack(node_count); + int stack_end = 0; + + int id 
= 0; + + for(int r=0; r +struct Range{ + Iter begin_, end_; + Iter begin()const{return begin_;} + Iter end()const{return end_;} +}; + +template +Range make_range(Iter begin, Iter end){ + return {begin, end}; +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/separator.h b/solvers/flow-cutter-pace17/src/separator.h new file mode 100644 index 0000000..d027e7d --- /dev/null +++ b/solvers/flow-cutter-pace17/src/separator.h @@ -0,0 +1,282 @@ +#ifndef SEPARATOR_H +#define SEPARATOR_H + +#include "node_flow_cutter.h" +#include "flow_cutter.h" +#include "flow_cutter_config.h" +#include "tiny_id_func.h" +#include "min_max.h" +#include "back_arc.h" + +namespace flow_cutter{ + + class ComputeSeparator{ + public: + explicit ComputeSeparator(Config config):config(config){} + + template + std::vector operator()(const Tail&tail, const Head&head)const{ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + auto out_arc = invert_sorted_id_id_func(tail); + auto back_arc = compute_back_arc_permutation(tail, head); + + std::vectorseparator; + + if(node_count <= 2){ + separator = {0}; + return separator; // NVRO + } + + { + UnionFind uf(node_count); + for(int i=0; ideg(node_count, 0); +// for(int i=0; i 0); +// if(deg[i] == 1){ +// separator = {i}; +// return separator; // NVRO +// } +// } +// } + + + switch(config.separator_selection){ + case Config::SeparatorSelection::node_min_expansion: + { + + auto expanded_graph = expanded_graph::make_graph( + make_const_ref_id_id_func(tail), + make_const_ref_id_id_func(head), + make_const_ref_id_id_func(back_arc), + make_const_ref_id_func(out_arc) + ); + + auto cutter = make_simple_cutter(expanded_graph, config); + auto pairs = select_random_source_target_pairs(node_count, config.cutter_count, config.random_seed); + + double best_score = std::numeric_limits::max(); + + cutter.init(expanded_graph::expand_source_target_pair_list(pairs), config.random_seed); + for(;;){ + + double cut_size = 
cutter.get_current_cut().size(); + double small_side_size = cutter.get_current_smaller_cut_side_size(); + + double score = cut_size / small_side_size; + + if(cutter.get_current_smaller_cut_side_size() < config.min_small_side_size * expanded_graph::expanded_node_count(node_count)) + score += 1000000; + + + if(score < best_score){ + best_score = score; + separator = expanded_graph::extract_original_separator(tail, head, cutter).sep; + if(separator.size() > config.max_cut_size) + break; + } + + double potential_best_next_score = (double)(cut_size+1)/(double)(expanded_graph::expanded_node_count(node_count)/2); + if(potential_best_next_score >= best_score) + break; + + if(!cutter.advance()) + break; + + } + } + break; + case Config::SeparatorSelection::edge_min_expansion: + { + auto graph = flow_cutter::make_graph( + make_const_ref_id_id_func(tail), + make_const_ref_id_id_func(head), + make_const_ref_id_id_func(back_arc), + ConstIntIDFunc<1>(arc_count), + make_const_ref_id_func(out_arc) + ); + + auto cutter = make_simple_cutter(graph, config); + + std::vectorbest_cut; + double best_score = std::numeric_limits::max(); + + cutter.init(select_random_source_target_pairs(node_count, config.cutter_count, config.random_seed), config.random_seed); + + for(;;){ + + double cut_size = cutter.get_current_cut().size(); + double small_side_size = cutter.get_current_smaller_cut_side_size(); + + double score = cut_size / small_side_size; + + if(cutter.get_current_smaller_cut_side_size() < config.min_small_side_size * node_count) + score += 1000000; + + + if(score < best_score){ + best_score = score; + best_cut = cutter.get_current_cut(); + if(best_cut.size() > config.max_cut_size) + break; + } + + double potential_best_next_score = (double)(cut_size+1)/(double)(expanded_graph::expanded_node_count(node_count)/2); + if(potential_best_next_score >= best_score) + break; + + if(!cutter.advance()) + break; + + } + + for(auto x:best_cut) + separator.push_back(head(x)); + + 
std::sort(separator.begin(), separator.end()); + separator.erase(std::unique(separator.begin(), separator.end()), separator.end()); + } + break; + case Config::SeparatorSelection::edge_first: + { + auto graph = flow_cutter::make_graph( + make_const_ref_id_id_func(tail), + make_const_ref_id_id_func(head), + make_const_ref_id_id_func(back_arc), + ConstIntIDFunc<1>(arc_count), + make_const_ref_id_func(out_arc) + ); + + auto cutter = make_simple_cutter(graph, config); + cutter.init(select_random_source_target_pairs(node_count, config.cutter_count, config.random_seed), config.random_seed); + while(cutter.get_current_smaller_cut_side_size() < config.min_small_side_size * node_count && cutter.get_current_cut().size() <= config.max_cut_size) + if(!cutter.advance()) + break; + + for(auto x:cutter.get_current_cut()) + separator.push_back(head(x)); + + std::sort(separator.begin(), separator.end()); + separator.erase(std::unique(separator.begin(), separator.end()), separator.end()); + } + break; + case Config::SeparatorSelection::node_first: + { + auto expanded_graph = expanded_graph::make_graph( + make_const_ref_id_id_func(tail), + make_const_ref_id_id_func(head), + make_const_ref_id_id_func(back_arc), + make_const_ref_id_func(out_arc) + ); + + auto cutter = make_simple_cutter(expanded_graph, config); + auto pairs = select_random_source_target_pairs(node_count, config.cutter_count, config.random_seed); + + cutter.init(expanded_graph::expand_source_target_pair_list(pairs), config.random_seed); + while(cutter.get_current_smaller_cut_side_size() < config.min_small_side_size * expanded_graph::expanded_node_count(node_count) && cutter.get_current_cut().size() <= config.max_cut_size) + if(!cutter.advance()) + break; + + separator = expanded_graph::extract_original_separator(tail, head, cutter).sep; + } + break; + default: + assert(false); + separator = {}; + } + return separator; // NVRO + + } + private: + Config config; + }; + + + + class ComputeSeparatorList{ + public: + explicit 
ComputeSeparatorList(Config config):config(config){} + + template + std::vector> operator()(const Tail&tail, const Head&head)const{ + const int node_count = tail.image_count(); + const int arc_count = tail.preimage_count(); + + auto out_arc = invert_sorted_id_id_func(tail); + auto back_arc = compute_back_arc_permutation(tail, head); + + std::vector>separator_list; + + if(node_count <= 2){ + separator_list.push_back({0}); + return separator_list; // NVRO + } + + { + UnionFind uf(node_count); + for(int i=0; i 4*node_count/10 && cutter.get_current_cut().size() > 2*separator_list.back().size()) + break; + + if(cutter.get_current_cut().size() > config.max_cut_size) + break; + + if(!cutter.advance()) + break; + } + } + + return separator_list; // NVRO + + } + private: + Config config; + }; +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/sort_arc.h b/solvers/flow-cutter-pace17/src/sort_arc.h new file mode 100644 index 0000000..88cac20 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/sort_arc.h @@ -0,0 +1,38 @@ +#ifndef SORT_ARC_H +#define SORT_ARC_H + +#include "id_sort.h" +#include "array_id_func.h" +#include "permutation.h" +#include "count_range.h" +#include + +template +ArrayIDIDFunc sort_arcs_first_by_tail_second_by_head(const Tail&tail, const Head&head){ + assert(tail.preimage_count() == head.preimage_count()); + assert(tail.image_count() == head.image_count()); + + const int arc_count = tail.preimage_count(); + + ArrayIDIDFunc + x(arc_count, arc_count), + y(arc_count, arc_count); + + stable_sort_copy_by_id( + CountIterator{0}, CountIterator{arc_count}, + y.begin(), + head.image_count(), + head + ); + stable_sort_copy_by_id( + y.begin(), y.end(), + x.begin(), + tail.image_count(), + tail + ); + + return x; //NVRO +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/tiny_id_func.h b/solvers/flow-cutter-pace17/src/tiny_id_func.h new file mode 100644 index 0000000..3975138 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/tiny_id_func.h @@ -0,0 
+1,155 @@ +#ifndef TINY_INT_ID_FUNC_H +#define TINY_INT_ID_FUNC_H + +#include +#include +#include "id_func.h" +#include "array_id_func.h" + +template +struct TinyIntIDFunc{ +private: + static constexpr int entry_count_per_uint64 = 64/bit_count; + static constexpr std::uint64_t entry_mask = (std::uint64_t(1)< + TinyIntIDFunc(const IDFunc&other, typename std::enable_if::value, void>::type*dummy=0) + :preimage_(other.preimage_count()), data_((other.preimage_count() + entry_count_per_uint64 - 1) / entry_count_per_uint64){ + for(int i=0; i> offset) & entry_mask; + } + + void set(int id, std::uint64_t value){ + assert(0 <= id && id < preimage_ && "id out of bounds"); + assert(0 <= value && value <= entry_mask && "value out of bounds"); + + int index = id / entry_count_per_uint64; + int offset = (id % entry_count_per_uint64)*bit_count; + + data_[index] ^= ((((data_[index] >> offset) & entry_mask) ^ value) << offset); + } + + void fill(std::uint64_t value){ + assert(0 <= value && value <= entry_mask && "value out of bounds"); + + if(bit_count == 1){ + if(value == false) + data_.fill(0); + else + data_.fill((std::uint64_t)-1); + }else if(value == 0){ + data_.fill(0); + }else{ + std::uint64_t pattern = value; + int shift = bit_count; + while(shift < 64){ + pattern |= pattern << shift; + shift <<= 1; + } + data_.fill(pattern); + } + } + + std::uint64_t move(int id){ + return operator()(id); + } + + void swap(TinyIntIDFunc&other)noexcept{ + std::swap(preimage_, other.preimage_); + data_.swap(other.data_); + } + + template + TinyIntIDFunc operator=(const typename std::enable_if::value, IDFunc>::type & other){ + TinyIntIDFunc(other).swap(*this); + return *this; + } + + + int preimage_; + ArrayIDFunc data_; +}; + + + +typedef TinyIntIDFunc<1> BitIDFunc; + +inline BitIDFunc operator~(BitIDFunc f){ + for(int i=0; i +#include + +template +bool is_tree(const Neighbors&neighbors){ + int node_count = neighbors.preimage_count(); + if(node_count <= 1) + return true; + + int 
reachable_node_count = 0; + + ArrayIDFuncparent(node_count); + parent.fill(-1); + ArrayIDFuncstack(node_count); + int stack_end = 0; + stack[stack_end++] = 0; + while(stack_end != 0){ + ++reachable_node_count; + auto x = stack[--stack_end]; + for(auto y:neighbors(x)){ + if(parent(x) != y){ + if(parent(y) == -1){ + parent[y] = x; + stack[stack_end++] = y; + }else{ + return false; + } + } + } + } + return reachable_node_count == node_count; +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/tree_decomposition.cpp b/solvers/flow-cutter-pace17/src/tree_decomposition.cpp new file mode 100644 index 0000000..3a02255 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/tree_decomposition.cpp @@ -0,0 +1,208 @@ +#include "tree_decomposition.h" +#include "io_helper.h" +#include +#include +#include +#include +#include "heap.h" +#include "tiny_id_func.h" +#include "contraction_graph.h" +#include "id_multi_func.h" +using namespace std; + +void print_tree_decompostion_of_order(std::ostream&out, ArrayIDIDFunc tail, ArrayIDIDFunc head, const ArrayIDIDFunc&order){ + const int node_count = tail.image_count(); + + auto inv_order = inverse_permutation(order); + tail = chain(tail, inv_order); + head = chain(head, inv_order); + + vector>nodes_in_bag; + ArrayIDFunc>bags_of_node(node_count); + + auto is_left_subset_of_right = [](const std::vector&l, const std::vector&r){ + auto i = l.begin(), j = r.begin(); + + for(;;){ + if(i == l.end()) + return true; + if(j == r.end()) + return false; + + if(*i < *j) + return false; + if(*i == *j) + ++i; + ++j; + } + }; + + auto compute_intersection_size = [](const std::vector&l, const std::vector&r){ + auto i = l.begin(), j = r.begin(); + int n = 0; + for(;;){ + if(i == l.end() || j == r.end()) + return n; + if(*i < *j) + ++i; + else if(*i > *j) + ++j; + else{ + ++i; + ++j; + ++n; + } + } + }; + + auto on_new_potential_maximal_clique = [&](int lowest_node_in_clique, std::vectorclique){ + for(auto b:bags_of_node(lowest_node_in_clique)) + 
if(is_left_subset_of_right(clique, nodes_in_bag[b])) + return; + int bag_id = nodes_in_bag.size(); + for(auto x:clique) + bags_of_node[x].push_back(bag_id); + nodes_in_bag.push_back(move(clique)); + }; + + + { + BitIDFunc is_root(node_count); + is_root.fill(true); + std::vectorupper_neighborhood_of_z; + int z = -1; + compute_chordal_supergraph( + tail, head, + [&](int x, int y){ + is_root.set(x, false); + if(z != -1 && z != x){ + upper_neighborhood_of_z.push_back(z); + sort(upper_neighborhood_of_z.begin(), upper_neighborhood_of_z.end()); + on_new_potential_maximal_clique(z, move(upper_neighborhood_of_z)); + upper_neighborhood_of_z.clear(); + } + z = x; + upper_neighborhood_of_z.push_back(y); + } + ); + if(z != -1){ + upper_neighborhood_of_z.push_back(z); + sort(upper_neighborhood_of_z.begin(), upper_neighborhood_of_z.end()); + on_new_potential_maximal_clique(z, move(upper_neighborhood_of_z)); + } + + for(int x=0; x maximum_bag_size) + maximum_bag_size = b.size(); + + out << "s td "<< nodes_in_bag.size() << ' ' << maximum_bag_size << ' ' << node_count << '\n'; + for(int i=0; itail, head, weight; + + for(int b=0; bneighbor_bags; + for(auto x:nodes_in_bag[b]){ + vectortmp; + std::set_union( + bags_of_node[x].begin(), bags_of_node[x].end(), + neighbor_bags.begin(), neighbor_bags.end(), + std::back_inserter(tmp) + ); + neighbor_bags.swap(tmp); + } + for(auto p:neighbor_bags){ + if(p != b){ + tail.push_back(b); + head.push_back(p); + weight.push_back(compute_intersection_size(nodes_in_bag[b], nodes_in_bag[p])); + } + } + } + + int arc_count = tail.size(); + + auto out_arc = invert_id_id_func( + id_id_func( + arc_count, bag_count, + [&](unsigned a){return tail[a];} + ) + ); + + BitIDFunc in_tree(bag_count); + in_tree.fill(false); + max_id_heapq(arc_count); + + for(int b=0; b&cell_list){ + int bag_count = cell_list.size(); + + out << "s td "<< bag_count << ' ' << get_treewidth_of_multilevel_partition(cell_list) << ' ' << get_node_count_of_multilevel_partition(cell_list) << 
'\n'; + + for(int i=0; i +#include + +void print_tree_decompostion_of_order(std::ostream&out, ArrayIDIDFunc tail, ArrayIDIDFunc head, const ArrayIDIDFunc&order); +void print_tree_decompostion_of_multilevel_partition(std::ostream&out, const ArrayIDIDFunc&tail, const ArrayIDIDFunc&head, const ArrayIDIDFunc&to_input_node_id, const std::vector&cell_list); + +#endif diff --git a/solvers/flow-cutter-pace17/src/tree_node_ranking.h b/solvers/flow-cutter-pace17/src/tree_node_ranking.h new file mode 100644 index 0000000..3d28a1a --- /dev/null +++ b/solvers/flow-cutter-pace17/src/tree_node_ranking.h @@ -0,0 +1,205 @@ +#ifndef TREE_NODE_RANKING_H +#define TREE_NODE_RANKING_H +#include "array_id_func.h" +#include "count_range.h" +#include "id_sort.h" +#include + +//! input graph must be a symmetric graph +//! The result is guaranteed to be optimal for trees. For non-trees there are no guarantees. +template +ArrayIDIDFunc compute_tree_node_ranking(const Neighbors&neighbors){ + const int node_count = neighbors.preimage_count(); + + ArrayIDFunc + parent(node_count), + child_first_order(node_count); + + // Root tree at node 0 + { + ArrayIDFuncstack(node_count); + int stack_end = 0; + int next_order_pos = node_count; + + stack[stack_end++] = 0; + parent.fill(-1); + parent[0] = -2; + + + while(stack_end != 0){ + auto x = stack[--stack_end]; + child_first_order[--next_order_pos] = x; + for(auto y:neighbors(x)){ + if(parent(y) == -1){ + assert(y != 0); + stack[stack_end++] = y; + parent[y] = x; + } + } + } + parent[0] = -1; + assert(next_order_pos == 0); + } + + + ArrayIDIDFunc level(node_count, 1); + + // Compute node levels + { + struct SubTreeInfo{ + std::vectorcritical_list; + int size; + }; + ArrayIDFunc>node_children_info(node_count); + + BitIDFunc crit(node_count); + crit.fill(false); + + for(int i=0; i max) + max = t; + }else{ + if(t > p) + p = t; + } + + auto&&first_child_critical_list = children_info[0].critical_list; + + for(int i = first_child_critical_list.size()-1; i>=0; 
--i){ + int t = first_child_critical_list[i]; + + if(t > max) + break; + if(p >= t) + continue; + + if(!crit(t)){ + crit.set(t, true); + }else{ + p = t; + } + } + + for(int i=0; i<=p; ++i) + crit.set(i, false); + for(int i=p+1; i<=max; ++i){ + if(!crit(i)){ + if(q == 0){ + q = i; + tree_info.critical_list = {q}; + } + }else{ + crit.set(i, false); + if(q != 0) + tree_info.critical_list.push_back(i); + } + } + + if(q == 0){ + + while(!first_child_critical_list.empty() && first_child_critical_list.back() <= max) + first_child_critical_list.pop_back(); + q = max+1; + while(!first_child_critical_list.empty() && first_child_critical_list.back() == q){ + first_child_critical_list.pop_back(); + ++q; + } + + tree_info.critical_list = std::move(first_child_critical_list); + tree_info.critical_list.push_back(q); + }else{ + while(!first_child_critical_list.empty() && first_child_critical_list.back() <= max) + first_child_critical_list.pop_back(); + assert(std::is_sorted(tree_info.critical_list.begin(), tree_info.critical_list.end())); + for(int i=tree_info.critical_list.size()-1; i>=0; --i) + first_child_critical_list.push_back(tree_info.critical_list[i]); + tree_info.critical_list = std::move(first_child_critical_list); + } + + assert(q > 0); + + if(level.image_count() < q) + level.set_image_count(q); + level[x] = q-1; + } + + assert(std::is_sorted(tree_info.critical_list.begin(), tree_info.critical_list.end(), std::greater())); + + if(parent(x) != -1) + node_children_info[parent(x)].push_back(std::move(tree_info)); + } + } + + return level; // NVRO +} + +#endif + diff --git a/solvers/flow-cutter-pace17/src/union_find.h b/solvers/flow-cutter-pace17/src/union_find.h new file mode 100644 index 0000000..7c12217 --- /dev/null +++ b/solvers/flow-cutter-pace17/src/union_find.h @@ -0,0 +1,89 @@ +#ifndef UNION_FIND_H +#define UNION_FIND_H + +#include "array_id_func.h" +#include + +//! 
An id-id-function that maps a node onto its component's representative
+struct UnionFind{
+public:
+	UnionFind():node_count_(0){}
+
+	explicit UnionFind(int node_count):parent_(node_count), node_count_(node_count), component_count_(node_count){
+		parent_.fill(-1);
+	}
+
+	void reset(){
+		parent_.fill(-1);
+		component_count_ = node_count_;
+	}
+
+	int preimage_count()const{return node_count_;}
+	int image_count()const{return node_count_;}
+
+	void unite(int l, int r){
+		assert(0 <= l && l < node_count_);
+		assert(0 <= r && r < node_count_);
+
+		l = operator()(l);
+		r = operator()(r);
+		if(l != r){
+			--component_count_;
+			if(-parent_[l] < -parent_[r]){
+				parent_[r] += parent_[l];
+				parent_[l] = r;
+			}else{
+				parent_[l] += parent_[r];
+				parent_[r] = l;
+			}
+		}
+	}
+
+	int operator()(int x)const{
+		assert(0 <= x && x < node_count_);
+
+		if(is_representative(x))
+			return x;
+
+		int y = x;
+		while(!is_representative(y))
+			y = parent_[y];
+
+		int z = x;
+		while(!is_representative(z)){
+			int tmp = parent_[z];
+			parent_[z] = y;
+			z = tmp;
+		}
+
+		return y;
+	}
+
+	bool in_same(int x, int y)const{
+		return (*this)(x) == (*this)(y);
+	}
+
+	bool is_representative(int v)const{
+		assert(0 <= v && v < node_count_);
+		return parent_(v) < 0;
+	}
+
+	int component_size(int v)const{
+		assert(0 <= v && v < node_count_);
+		if(is_representative(v))
+			return -parent_(v);
+		else
+			return 0;
+	}
+
+	int component_count()const{
+		return component_count_;
+	}
+
+private:
+	mutable ArrayIDFunc<int> parent_;
+	int node_count_;
+	int component_count_;
+};
+
+#endif
diff --git a/solvers/htd-master/.gitignore b/solvers/htd-master/.gitignore
new file mode 100644
index 0000000..a6dd3fb
--- /dev/null
+++ b/solvers/htd-master/.gitignore
@@ -0,0 +1,6 @@
+include/htd/CompilerDetection.hpp
+include/htd/Id.hpp
+include/htd/PreprocessorDefinitions.hpp
+include/htd_io/PreprocessorDefinitions.hpp
+include/htd_cli/PreprocessorDefinitions.hpp
+src/htd/AssemblyInfo.cpp
diff --git a/solvers/htd-master/.travis.yml
b/solvers/htd-master/.travis.yml
new file mode 100644
index 0000000..c0a407b
--- /dev/null
+++ b/solvers/htd-master/.travis.yml
@@ -0,0 +1,97 @@
+# Build matrix / environment variables are explained on:
+# http://about.travis-ci.org/docs/user/build-configuration/
+# This file can be validated on:
+# http://lint.travis-ci.org/
+
+osx_image: xcode8.1
+
+install:
+- export DEPENDENCIES_DIRECTORY="${TRAVIS_BUILD_DIR}/dependencies"
+- mkdir ${DEPENDENCIES_DIRECTORY} && cd ${DEPENDENCIES_DIRECTORY}
+- if [ "$CXX" = "g++" ] && [ "$TRAVIS_OS_NAME" = "linux" ]; then export CXX="g++-4.9" CC="gcc-4.9"; fi
+- if [ "$CXX" = "clang++" ] && [ "$TRAVIS_OS_NAME" = "linux" ]; then export CXX="clang++-3.7" CC="clang-3.7"; fi
+- |
+  if [[ "${TRAVIS_OS_NAME}" == "linux" ]]; then
+    CMAKE_URL="http://www.cmake.org/files/v3.2/cmake-3.2.0-Linux-x86_64.tar.gz"
+    mkdir cmake && travis_retry wget --no-check-certificate --quiet -O - ${CMAKE_URL} | tar --strip-components=1 -xz -C cmake
+    export PATH=${DEPENDENCIES_DIRECTORY}/cmake/bin:${PATH}
+  fi
+- |
+  if [[ "${TRAVIS_OS_NAME}" == "osx" ]]; then
+    brew update
+    brew install cmake
+  fi
+- |
+  if [[ "$RUN_COVERAGE_TEST" == "1" ]]; then
+    mkdir -p "${DEPENDENCIES_DIRECTORY}/gcov/bin"
+    export PATH=${DEPENDENCIES_DIRECTORY}/gcov/bin:${PATH}
+    ln -fs /usr/bin/gcov-4.9 "${DEPENDENCIES_DIRECTORY}/gcov/bin/gcov" && gcov --version
+    pip install --user requests[security]
+    pip install --user cpp-coveralls
+  fi
+- cd ${TRAVIS_BUILD_DIR}
+- echo ${PATH}
+- echo ${CXX}
+- ${CXX} --version
+- ${CXX} -v
+
+addons:
+  apt:
+    # The list of packages whitelisted for Travis on ubuntu-precise can be found here:
+    # https://github.com/travis-ci/apt-package-whitelist/blob/master/ubuntu-precise
+    # The list of whitelisted Travis apt sources:
+    # https://github.com/travis-ci/apt-source-whitelist/blob/master/ubuntu.json
+    sources:
+    - ubuntu-toolchain-r-test
+    - llvm-toolchain-precise-3.7
+    packages:
+    - gcc-4.9
+    - g++-4.9
+    - clang-3.7
+    - time
+    - valgrind
+
+os:
+  - linux
+  - osx
+
+language: cpp
+
+compiler:
+  - gcc
+  - clang
+
+before_script:
+  - chmod +x travis.sh
+
+script: ./travis.sh
+
+after_success:
+  - |
+    if [[ "$RUN_COVERAGE_TEST" == "1" ]]; then
+      cd ${TRAVIS_BUILD_DIR}
+      coveralls --exclude "/usr" --exclude "test" --exclude "build/htd/CMakeFiles" --exclude "dependencies" --gcov-options '\-lbp'
+    fi
+
+after_failure:
+  - |
+    if [ -f "${TRAVIS_BUILD_DIR}/build/${HTD_TARGET}/Testing/Temporary/LastTest.log" ]; then
+      cat "${TRAVIS_BUILD_DIR}/build/${HTD_TARGET}/Testing/Temporary/LastTest.log"
+    fi
+
+env:
+  - HTD_TARGET=htd BUILD_SHARED_LIBS=True CMAKE_BUILD_TYPE=Debug HTD_USE_EXTENDED_IDENTIFIERS=True
+  - HTD_TARGET=htd BUILD_SHARED_LIBS=True CMAKE_BUILD_TYPE=Debug HTD_USE_EXTENDED_IDENTIFIERS=False
+  - HTD_TARGET=htd BUILD_SHARED_LIBS=False CMAKE_BUILD_TYPE=Debug HTD_USE_EXTENDED_IDENTIFIERS=True
+  - HTD_TARGET=htd BUILD_SHARED_LIBS=False CMAKE_BUILD_TYPE=Debug HTD_USE_EXTENDED_IDENTIFIERS=False
+
+matrix:
+  include:
+    - os: linux
+      compiler: gcc
+      env: HTD_TARGET=htd BUILD_SHARED_LIBS=True CMAKE_BUILD_TYPE=Debug HTD_USE_EXTENDED_IDENTIFIERS=False RUN_COVERAGE_TEST=1
+
+notifications:
+  email: false
+
+sudo: false
diff --git a/solvers/htd-master/CMakeLists.txt b/solvers/htd-master/CMakeLists.txt
new file mode 100644
index 0000000..2b0ab33
--- /dev/null
+++ b/solvers/htd-master/CMakeLists.txt
@@ -0,0 +1,104 @@
+cmake_minimum_required(VERSION 3.2)
+
+project(htd)
+
+set(CPACK_GENERATOR "STGZ;TGZ;TZ;ZIP")
+set(CPACK_SOURCE_GENERATOR "STGZ;TGZ;TZ;ZIP")
+set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "A small but efficient C++ library for computing (customized) tree and hypertree decompositions.")
+set(CPACK_PACKAGE_VENDOR "Michael Abseher (abseher@dbai.tuwien.ac.at)")
+set(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_CURRENT_SOURCE_DIR}/README.md")
+set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/LICENSE.txt")
+set(CPACK_PACKAGE_VERSION_MAJOR "1")
+set(CPACK_PACKAGE_VERSION_MINOR "2")
+set(CPACK_PACKAGE_VERSION_PATCH "0")
+
+include(CPack)
+include(CTest)
+include(CheckIncludeFileCXX)
+include(WriteCompilerDetectionHeader)
+
+find_package(Git)
+if(GIT_FOUND)
+    execute_process(COMMAND ${GIT_EXECUTABLE} update-index --refresh WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}")
+    execute_process(COMMAND ${GIT_EXECUTABLE} describe --tags --always WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}" RESULT_VARIABLE HTD_GIT_RETURN_VALUE OUTPUT_VARIABLE HTD_GIT_CURRENT_COMMIT_ID)
+    if(NOT ${HTD_GIT_RETURN_VALUE} EQUAL 0)
+        set(HTD_GIT_COMMIT_ID "")
+        message(WARNING "Could not derive Git commit ID. Build will not contain Git revision info.")
+    endif()
+
+    string(REPLACE "\n" "" HTD_GIT_COMMIT_ID "${HTD_GIT_CURRENT_COMMIT_ID}")
+else()
+    set(HTD_GIT_COMMIT_ID "")
+    message(WARNING "Could not find Git installation. Build will not contain Git revision info.")
+endif()
+
+if(NOT CMAKE_BUILD_TYPE)
+    set(CMAKE_BUILD_TYPE Release CACHE STRING
+        "Please choose the type of build. Options are: None Debug Release RelWithDebInfo MinSizeRel."
+ FORCE ) +endif() + +if(NOT DEFINED BUILD_SHARED_LIBS) + set(BUILD_SHARED_LIBS ON) +endif() + +if(DEFINED HTD_DEBUG_OUTPUT) + if(HTD_DEBUG_OUTPUT) + message("Debugging output is enabled!") + + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DHTD_DEBUG_OUTPUT" ) + endif() +endif() + +if(DEFINED HTD_USE_EXTENDED_IDENTIFIERS) + if(HTD_USE_EXTENDED_IDENTIFIERS) + message("Extended identifiers will be used!") + + set(HTD_ID_TYPE "std::size_t" ) + else() + set(HTD_ID_TYPE "std::uint_least32_t" ) + endif() +else() + set(HTD_ID_TYPE "std::uint_least32_t" ) +endif() + +if(NOT DEFINED BUILD_TESTING) + message("Tests are disabled!") + + set(BUILD_TESTING OFF) +endif() + +subdirs(src/htd) +subdirs(src/htd_io) +subdirs(src/htd_cli) +subdirs(src/htd_main) + +subdirs(test) + +configure_file( + "${CMAKE_CURRENT_SOURCE_DIR}/cmake/templates/cmake_uninstall.cmake.in" + "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake" + IMMEDIATE @ONLY) + +find_package(Doxygen) +if(DOXYGEN_FOUND) + configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/templates/Doxyfile.in" + "${CMAKE_CURRENT_BINARY_DIR}/Doxyfile" @ONLY) + + add_custom_target(doc + ${DOXYGEN_EXECUTABLE} ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile + WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} + COMMENT "Generate API documentation" VERBATIM +) +endif(DOXYGEN_FOUND) + +if(MSVC) + set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG "${CMAKE_BINARY_DIR}/build/debug") + set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE "${CMAKE_BINARY_DIR}/build/release") + set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_DEBUG "${CMAKE_BINARY_DIR}/build/debug") + set(CMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE "${CMAKE_BINARY_DIR}/build/release") + set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_DEBUG "${CMAKE_BINARY_DIR}/build/debug") + set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE "${CMAKE_BINARY_DIR}/build/release") +endif(MSVC) + +add_custom_target(uninstall COMMAND ${CMAKE_COMMAND} -P ${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake) diff --git a/solvers/htd-master/FORMATS.md b/solvers/htd-master/FORMATS.md new 
file mode 100644 index 0000000..d0e6b7c --- /dev/null +++ b/solvers/htd-master/FORMATS.md @@ -0,0 +1,38 @@ +# SUPPORTED FILE FORMATS + +## Input Formats + +**htd** supports the following input file formats: + +* gr: + + The input file format of the 1st Parameterized Algorithms and Computational Experiments Challenge. + For more information see [https://pacechallenge.wordpress.com/track-a-treewidth/](https://pacechallenge.wordpress.com/track-a-treewidth/). + +* lp: + + This format is inspired by logic programming. Vertices of the given input graph are defined via facts of + format `vertex(<ID>)` and edges are specified using facts of format `edge(<ID1>, <ID2>)`. + +* hgr: + + Similar to format 'gr' where hyperedges (edges with more than two end-points) are allowed. + +## Output Formats + +**htd** supports the following output file formats: + +* td: + + The output file format of the 1st Parameterized Algorithms and Computational Experiments Challenge. + For more information see [https://pacechallenge.wordpress.com/track-a-treewidth/](https://pacechallenge.wordpress.com/track-a-treewidth/). + +* human: + + Print the decomposition in a human-readable format. + +* width: + + Print only the maximum bag size of the computed decomposition. + +(Note that output is written to `stdout`!)
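For concreteness, here is a sketch of the 'gr' and 'td' formats following the published PACE challenge specification (these files are illustrative, not shipped with the repository). A triangle on three vertices admits a decomposition with a single bag containing all vertices, i.e. width 2:

```text
c triangle.gr: a graph with 3 vertices and 3 edges
p tw 3 3
1 2
2 3
1 3

c triangle.td: 1 bag, largest bag size 3, 3 vertices;
c a one-bag tree needs no tree-edge lines
s td 1 3 3
b 1 1 2 3
```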
diff --git a/solvers/htd-master/INSTALL.md b/solvers/htd-master/INSTALL.md new file mode 100644 index 0000000..c754ca7 --- /dev/null +++ b/solvers/htd-master/INSTALL.md @@ -0,0 +1,53 @@ +# COMPILE + +## Dependencies + +**htd** has the following compilation dependencies: + +* A recent C++11 compiler + * [g++](https://gcc.gnu.org/) + * [Clang](http://clang.llvm.org/) + * [Visual Studio](https://www.visualstudio.com/) + * or some other compiler of your choice +* [CMake](http://cmake.org/) for generating the build scripts +* [Doxygen](http://www.doxygen.org/) for generating the documentation + +**htd** was compiled and tested with the following versions of the dependencies mentioned above: + +* g++ 4.9.2 +* Clang 3.4 +* Visual Studio 2015 +* CMake 3.2 +* Doxygen 1.8.6 + +Unless the developers of the dependencies introduce incompatible changes, compiling **htd** with later versions of the dependencies should work too. (Older versions might work if you're lucky.) + +## Compilation + +### UNIX + +For the actual compilation step of **htd** just run `cmake <source-directory>` (you may want to select a desired *CMAKE_INSTALL_PREFIX* to choose the installation directory) and `make` in a directory of your choice. Via the commands `make test` and `make doc` you can run the test cases shipped with **htd** and create the API documentation of **htd** once the compilation step has finished. + +### Windows + +To generate the necessary project configuration for Visual Studio, run `cmake -G "Visual Studio 14 2015" -DCMAKE_CONFIGURATION_TYPES="Debug;Release" <source-directory>`. Afterwards you can use the Visual Studio C++ compiler to build **htd**. + +# INSTALL + +### UNIX + +After compiling the library, you can install it as well as the front-end application **htd_main** and all required headers via `make install`. (Note that you must not delete the file `install_manifest.txt` generated in this step, because otherwise uninstalling **htd** cannot be done in an automated way any more.)
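Putting the UNIX compile and install steps above together, an out-of-source build might look like the following (directory names and the install prefix are illustrative; `~/htd-master` stands for wherever the sources were unpacked):

```shell
# Out-of-source build: keep generated files away from the source tree.
mkdir build && cd build

# Configure; CMAKE_INSTALL_PREFIX is optional and shown only as an example.
cmake -DCMAKE_INSTALL_PREFIX="$HOME/.local" ~/htd-master

# Compile, run the shipped test cases, and build the API documentation.
make
make test
make doc

# Install htd_main, the library, and the headers.
make install
```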
+ +### Windows + +Currently, **htd** cannot be installed via a single command. As a workaround you can copy the relevant binaries to a directory of your choice. + +# UNINSTALL + +### UNIX + +If you decide to uninstall the library, you can easily do that via the command `make uninstall`. (If you deleted the file `install_manifest.txt` required for automated uninstallation it should also work to delete all folders related to **htd** in *CMAKE_INSTALL_PREFIX* manually.) + +### Windows + +Currently, **htd** cannot be uninstalled via a single command. As a workaround you can delete the generated and/or copied files manually. diff --git a/solvers/htd-master/LICENSE.txt b/solvers/htd-master/LICENSE.txt new file mode 100644 index 0000000..733c072 --- /dev/null +++ b/solvers/htd-master/LICENSE.txt @@ -0,0 +1,675 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. 
Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. 
If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. 
Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. 
+ + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. 
Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. 
+ + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. 
+ + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. 
If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. 
+ + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. 
+ + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the 
material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. 
+ + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. 
If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. 
+ + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. 
You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. 
The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. 
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. 
+ + {one line to give the program's name and a brief idea of what it does.} + Copyright (C) {year} {name of author} + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + {project} Copyright (C) {year} {fullname} + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +<http://www.gnu.org/licenses/>. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +<http://www.gnu.org/philosophy/why-not-lgpl.html>. 
+ + diff --git a/solvers/htd-master/README.md b/solvers/htd-master/README.md new file mode 100644 index 0000000..b07d82c --- /dev/null +++ b/solvers/htd-master/README.md @@ -0,0 +1,331 @@ +# htd + +[![License](http://img.shields.io/badge/license-GPLv3-blue.svg)](https://www.gnu.org/copyleft/gpl.html) +[![Build Status Travis-CI](https://travis-ci.org/mabseher/htd.svg?branch=master)](https://travis-ci.org/mabseher/htd) +[![Build Status AppVeyor](https://ci.appveyor.com/api/projects/status/9pam14xyi946p21u/branch/master?svg=true)](https://ci.appveyor.com/project/mabseher/htd/branch/master) +[![Coverity Status](https://scan.coverity.com/projects/8163/badge.svg)](https://scan.coverity.com/projects/mabseher-htd) +[![Code Coverage Status](https://coveralls.io/repos/github/mabseher/htd/badge.svg?branch=master&update=1)](https://coveralls.io/github/mabseher/htd?branch=master) + +**A small but efficient C++ library for computing (customized) tree and hypertree decompositions.** + +**htd** is a library that not only computes tree decompositions; it also allows you to fully customize them via (custom) manipulations and labelings, and to optimize them based on a user-provided fitness function. The library provides efficient implementations of well-established techniques for computing tree decompositions (like bucket-elimination based on a vertex elimination ordering of the input graph), and it is optimized for large (hyper)graphs. At the current stage, **htd** is able to efficiently decompose graphs containing millions of vertices and several hundreds of thousands of (hyper)edges. + +For almost every class used in the library, **htd** provides an interface and a corresponding factory class, allowing you to directly influence the process of generating decompositions without having to re-invent the wheel at any place. 
That is, if one, for instance, develops a new heuristic for vertex elimination orderings, there is absolutely no need to change anything in the remainder of the library in order to test its influence on the quality of bucket-elimination. (In fact, one does not even need to re-compile **htd** for such a modification as long as all interfaces are implemented properly by the new algorithm and as long as the algorithm is made available to **htd** via the corresponding factory class.) + +## BUILD PROCESS + +For instructions about the build process, please read the `INSTALL.md` file. + +## USAGE + +### Using htd via command-line interface + +For using **htd** via the command-line interface there is the front-end application **htd_main**. **htd_main** provides access to basic functionality of **htd**. + +A program call for **htd_main** is of the following form: + +`./htd_main [-h] [-v] [-s <seed>] [--type <type>] [--input <format>] [--instance <file>] [--output <format>] [--print-progress] [--strategy <strategy>] [--preprocessing <preprocessing>] [--triangulation-minimization] [--opt <criterion>] [--iterations <n>] [--patience <n>] < $FILE` + +Options are organized in the following groups: + +* General Options: + * `--help, -h : Print usage information and exit.` + * `--version, -v : Print version information and exit.` + * `--seed, -s <seed> : Set the seed for the random number generator to <seed>.` + +* Decomposition Options: + * `--type <type> : Compute a graph decomposition of type <type>.` + * `Permitted Values:` + * `.) tree : Compute a tree decomposition of the input graph. (default)` + * `.) hypertree : Compute a hypertree decomposition of the input graph.` + +* Input-Specific Options: + * `--input <format> : Assume that the input graph is given in format <format>.` + * `Permitted Values:` + * `.) gr : Use the input format 'gr'. (default)` + * `.) lp : Use the input format 'lp'.` + * `.) hgr : Use the input format 'hgr'.` + + (See [FORMATS](https://github.com/mabseher/htd/blob/master/FORMATS.md) for information about the available input formats.) 
+ * `--instance <file> : Read the input graph from file <file>.` + +* Output-Specific Options: + * `--output <format> : Set the output format of the decomposition to <format>.` + * `Permitted Values:` + * `.) td : Use the output format 'td'. (default)` + * `.) human : Provide a human-readable output of the decomposition.` + * `.) width : Provide only the maximum bag size of the decomposition.` + + (See [FORMATS](https://github.com/mabseher/htd/blob/master/FORMATS.md) for information about the available output formats.) + * `--print-progress : Print decomposition progress.` + +* Algorithm Options: + * `--strategy <strategy> : Set the decomposition strategy which shall be used to <strategy>.` + * `Permitted Values:` + * `.) random : Use a random vertex ordering.` + * `.) min-fill : Minimum fill ordering algorithm (default)` + * `.) min-degree : Minimum degree ordering algorithm` + * `.) min-separator : Minimum separating vertex set heuristic` + * `.) max-cardinality : Maximum cardinality search ordering algorithm` + * `.) max-cardinality-enhanced : Enhanced maximum cardinality search ordering algorithm (MCS-M)` + * `.) challenge : Use a combination of different decomposition strategies.` + * `--preprocessing <preprocessing> : Set the preprocessing strategy which shall be used to <preprocessing>.` + * `Permitted Values:` + * `.) none : Do not preprocess the input graph.` + * `.) simple : Use simple preprocessing capabilities.` + * `.) advanced : Use advanced preprocessing capabilities.` + * `.) full : Use the full set of preprocessing capabilities.` + * `--triangulation-minimization : Apply triangulation minimization approach.` + +* Optimization Options: + * `--opt <criterion> : Iteratively compute a decomposition which optimizes <criterion>.` + * `Permitted Values:` + * `.) none : Do not perform any optimization. (default)` + * `.) width : Minimize the maximum bag size of the computed decomposition.` + * `--iterations <n> : Set the number of iterations to be performed during optimization to <n> (0 = infinite). 
(Default: 10)` + * `--patience <n> : Terminate the algorithm if more than <n> iterations did not lead to an improvement (-1 = infinite). (Default: -1)` + +### Using htd as a developer + +The following example code uses the most important features of **htd**. + +A full API documentation can be generated via `make doc` (requires [Doxygen](http://www.doxygen.org/)): + +```cpp +#include <htd/main.hpp> + +#include <chrono> +#include <csignal> +#include <cstdlib> +#include <iostream> +#include <memory> +#include <vector> + +// Create a management instance of the 'htd' library in order to allow centralized configuration. +std::unique_ptr<htd::LibraryInstance> manager(htd::createManagementInstance(htd::Id::FIRST)); + +/** + * Sample fitness function which minimizes width and height of the decomposition. + * + * Width is of higher priority than height, i.e., at first, the width is minimized + * and if two decompositions have the same width, the one of lower height is chosen. + */ +class FitnessFunction : public htd::ITreeDecompositionFitnessFunction +{ + public: + FitnessFunction(void) + { + + } + + ~FitnessFunction() + { + + } + + htd::FitnessEvaluation * fitness(const htd::IMultiHypergraph & graph, + const htd::ITreeDecomposition & decomposition) const + { + HTD_UNUSED(graph) + + /** + * Here we specify the fitness evaluation for a given decomposition. + * In this case, we select the maximum bag size and the height. + */ + return new htd::FitnessEvaluation(2, + -(double)(decomposition.maximumBagSize()), + -(double)(decomposition.height())); + } + + FitnessFunction * clone(void) const + { + return new FitnessFunction(); + } +}; + +/** + * Signal handling procedure. + */ +void handleSignal(int signal) +{ + switch (signal) + { + case SIGINT: + case SIGTERM: + { + manager->terminate(); + + break; + } + default: + { + break; + } + } + + std::signal(signal, handleSignal); +} + +int main(int, const char * const * const) +{ + std::signal(SIGINT, handleSignal); + std::signal(SIGTERM, handleSignal); + + std::srand(0); + + // Create a new graph instance which can handle (multi-)hyperedges. 
+ + htd::IMutableMultiHypergraph * graph = + manager->multiHypergraphFactory().createInstance(); + + /** + * Add five vertices to the sample graph. + * The vertices of a graph are numbered + * in ascending order starting from 1. + */ + graph->addVertices(5); + + // Add two edges to the graph. + graph->addEdge(1, 2); + graph->addEdge(2, 3); + + // Add a hyperedge to the graph. + graph->addEdge(std::vector<htd::vertex_t> { 5, 4, 3 }); + + // Create an instance of the fitness function. + FitnessFunction fitnessFunction; + + /** + * This operation changes the root of a given decomposition so that the provided + * fitness function is maximized. When no fitness function is provided to the + * constructor, the constructed optimization operation does not perform any + * optimization and only applies provided manipulations. + */ + htd::TreeDecompositionOptimizationOperation * operation = + new htd::TreeDecompositionOptimizationOperation(manager.get(), fitnessFunction); + + /** + * Set the previously created management instance to support graceful termination. + */ + operation->setManagementInstance(manager.get()); + + /** + * Set the vertex selection strategy (default = exhaustive). + * + * In this case, we want to select (at most) 10 vertices of the input decomposition randomly. + */ + operation->setVertexSelectionStrategy(new htd::RandomVertexSelectionStrategy(10)); + + /** + * Set desired manipulations. In this case we want a nice (= normalized) tree decomposition. + */ + operation->addManipulationOperation(new htd::NormalizationOperation(manager.get())); + + /** + * Optionally, we can set the vertex elimination algorithm. + * We decide to use the min-degree heuristic in this case. + */ + manager->orderingAlgorithmFactory() + .setConstructionTemplate(new htd::MinDegreeOrderingAlgorithm(manager.get())); + + // Get the default tree decomposition algorithm. One can also choose a custom one. 
+ + htd::ITreeDecompositionAlgorithm * baseAlgorithm = + manager->treeDecompositionAlgorithmFactory().createInstance(); + + /** + * Set the optimization operation as manipulation operation in order + * to choose the optimal root reducing height of the tree decomposition. + */ + baseAlgorithm->addManipulationOperation(operation); + + /** + * Create a new instance of htd::IterativeImprovementTreeDecompositionAlgorithm based + * on the base algorithm and the fitness function. Note that the fitness function can + * be an arbitrary one and can differ from the one used in the optimization operation. + */ + htd::IterativeImprovementTreeDecompositionAlgorithm algorithm(manager.get(), + baseAlgorithm, + fitnessFunction); + + /** + * Set the maximum number of iterations after which the best decomposition with + * respect to the fitness function shall be returned. Use value 1 to make the + * iterative algorithm return the first decomposition found. + */ + algorithm.setIterationCount(10); + + /** + * Set the maximum number of iterations without improvement after which the algorithm returns the + * best decomposition with respect to the fitness function found so far. A limit of 0 aborts + * the algorithm after the first non-improving solution has been found, i.e. the algorithm + * will perform a simple hill-climbing approach. + */ + algorithm.setNonImprovementLimit(3); + + // Record the optimal maximal bag size of the tree decomposition to allow printing the progress. + std::size_t optimalBagSize = (std::size_t)-1; + + /** + * Compute the decomposition. Note that the additional, optional parameter of the function + * computeDecomposition() in case of htd::IterativeImprovementTreeDecompositionAlgorithm + * can be used to intercept every new decomposition. In this case we output some + * intermediate information upon perceiving an improved decomposition. 
+ */ + htd::ITreeDecomposition * decomposition = + algorithm.computeDecomposition(*graph, [&](const htd::IMultiHypergraph & graph, + const htd::ITreeDecomposition & decomposition, + const htd::FitnessEvaluation & fitness) + { + // Disable warnings concerning unused variables. + HTD_UNUSED(graph) + HTD_UNUSED(decomposition) + + std::size_t bagSize = -fitness.at(0); + + /** + * After each improvement we print the current optimal + * width + 1 and the time when the decomposition was found. + */ + if (bagSize < optimalBagSize) + { + optimalBagSize = bagSize; + + std::chrono::milliseconds::rep msSinceEpoch = + std::chrono::duration_cast<std::chrono::milliseconds> + (std::chrono::system_clock::now().time_since_epoch()).count(); + + std::cout << "c status " << optimalBagSize << " " << msSinceEpoch << std::endl; + } + }); + + // If a decomposition was found we want to print it to stdout. + if (decomposition != nullptr) + { + /** + * Check whether the algorithm indeed computed a valid decomposition. + * + * algorithm.isSafelyInterruptible() for decomposition algorithms allows one to + * find out if the algorithm is safely interruptible. If the getter returns + * true, the algorithm guarantees that the computed decomposition is indeed + * a valid one and that all manipulations and all labelings were applied + * successfully. + */ + if (!manager->isTerminated() || algorithm.isSafelyInterruptible()) + { + // Print the height of the decomposition to stdout. + std::cout << decomposition->height() << std::endl; + + // Print the size of the largest bag of the decomposition to stdout. + std::cout << decomposition->maximumBagSize() << std::endl; + } + + delete decomposition; + } + + delete graph; + + return 0; +} +``` + +## LICENSE + +**htd** is released under the GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007. + +A copy of the license should be provided with the system, otherwise see [http://www.gnu.org/licenses/](http://www.gnu.org/licenses/). 
diff --git a/solvers/htd-master/appveyor.yml b/solvers/htd-master/appveyor.yml new file mode 100644 index 0000000..8094681 --- /dev/null +++ b/solvers/htd-master/appveyor.yml @@ -0,0 +1,66 @@ +version: '{build}' + +image: + - Visual Studio 2015 + - Visual Studio 2017 + +environment: + matrix: + - BUILD_SHARED_LIBS: True + HTD_USE_EXTENDED_IDENTIFIERS: True + - BUILD_SHARED_LIBS: True + HTD_USE_EXTENDED_IDENTIFIERS: False + - BUILD_SHARED_LIBS: False + HTD_USE_EXTENDED_IDENTIFIERS: True + - BUILD_SHARED_LIBS: False + HTD_USE_EXTENDED_IDENTIFIERS: False + +platform: + - Win32 + - x64 + +configuration: + - Release + - Debug + +build: + verbosity: minimal + +before_build: +- ps: | + Write-Output "Configuration: $env:CONFIGURATION" + Write-Output "Platform: $env:PLATFORM" + $generator = switch ($env:APPVEYOR_BUILD_WORKER_IMAGE) + { + "Visual Studio 2015" {"Visual Studio 14 2015"} + "Visual Studio 2017" {"Visual Studio 15 2017"} + } + if ($env:PLATFORM -eq "x64") + { + $generator = "$generator Win64" + } + if ($env:APPVEYOR_BUILD_WORKER_IMAGE -eq "Visual Studio 2015") + { + del "C:\Program Files (x86)\MSBuild\14.0\Microsoft.Common.targets\ImportAfter\Xamarin.Common.targets" + } + +build_script: +- ps: | + md _build -Force | Out-Null + cd _build + & cmake -G "$generator" -DCMAKE_CONFIGURATION_TYPES="Debug;Release" -DBUILD_SHARED_LIBS=$env:BUILD_SHARED_LIBS -DHTD_USE_EXTENDED_IDENTIFIERS=$env:HTD_USE_EXTENDED_IDENTIFIERS .. + if ($LastExitCode -ne 0) { + throw "Exec: $ErrorMessage" + } + & cmake --build . 
--config $env:CONFIGURATION + if ($LastExitCode -ne 0) { + throw "Exec: $ErrorMessage" + } + +test_script: +- ps: | + & ctest -VV -C $env:CONFIGURATION --output-on-failure + Push-AppveyorArtifact "Testing/Temporary/LastTest.log" + if ($LastExitCode -ne 0) { + throw "Exec: $ErrorMessage" + } diff --git a/solvers/htd-master/cmake/templates/Doxyfile.in b/solvers/htd-master/cmake/templates/Doxyfile.in new file mode 100644 index 0000000..625f413 --- /dev/null +++ b/solvers/htd-master/cmake/templates/Doxyfile.in @@ -0,0 +1,2344 @@ +# Doxyfile 1.8.6 + +# This file describes the settings to be used by the documentation system +# doxygen (www.doxygen.org) for a project. +# +# All text after a double hash (##) is considered a comment and is placed in +# front of the TAG it is preceding. +# +# All text after a single hash (#) is considered a comment and will be ignored. +# The format is: +# TAG = value [value, ...] +# For lists, items can also be appended using: +# TAG += value [value, ...] +# Values that contain spaces should be placed between quotes (\" \"). + +#--------------------------------------------------------------------------- +# Project related configuration options +#--------------------------------------------------------------------------- + +# This tag specifies the encoding used for all characters in the config file +# that follow. The default is UTF-8 which is also the encoding used for all text +# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv +# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv +# for the list of possible encodings. +# The default value is: UTF-8. + +DOXYFILE_ENCODING = UTF-8 + +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by +# double-quotes, unless you are using Doxywizard) that should identify the +# project for which the documentation is generated. This name is used in the +# title of most generated pages and in a few other places. 
+# The default value is: My Project. + +PROJECT_NAME = htd + +# The PROJECT_NUMBER tag can be used to enter a project or revision number. This +# could be handy for archiving the generated documentation or if some version +# control system is used. + +PROJECT_NUMBER = + +# Using the PROJECT_BRIEF tag one can provide an optional one line description +# for a project that appears at the top of each page and should give viewer a +# quick idea about the purpose of the project. Keep the description short. + +PROJECT_BRIEF = + +# With the PROJECT_LOGO tag one can specify an logo or icon that is included in +# the documentation. The maximum height of the logo should not exceed 55 pixels +# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo +# to the output directory. + +PROJECT_LOGO = + +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path +# into which the generated documentation will be written. If a relative path is +# entered, it will be relative to the location where doxygen was started. If +# left blank the current directory will be used. + +OUTPUT_DIRECTORY = @CMAKE_CURRENT_BINARY_DIR@/doc + +# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub- +# directories (in 2 levels) under the output directory of each output format and +# will distribute the generated files over these directories. Enabling this +# option can be useful when feeding doxygen a huge amount of source files, where +# putting all generated files in the same directory would otherwise causes +# performance problems for the file system. +# The default value is: NO. + +CREATE_SUBDIRS = NO + +# The OUTPUT_LANGUAGE tag is used to specify the language in which all +# documentation generated by doxygen is written. Doxygen will use this +# information to generate all constant output in the proper language. 
+# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, +# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), +# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, +# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), +# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, +# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, +# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, +# Ukrainian and Vietnamese. +# The default value is: English. + +OUTPUT_LANGUAGE = English + +# If the BRIEF_MEMBER_DESC tag is set to YES doxygen will include brief member +# descriptions after the members that are listed in the file and class +# documentation (similar to Javadoc). Set to NO to disable this. +# The default value is: YES. + +BRIEF_MEMBER_DESC = YES + +# If the REPEAT_BRIEF tag is set to YES doxygen will prepend the brief +# description of a member or function before the detailed description +# +# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the +# brief descriptions will be completely suppressed. +# The default value is: YES. + +REPEAT_BRIEF = YES + +# This tag implements a quasi-intelligent brief description abbreviator that is +# used to form the text in various listings. Each string in this list, if found +# as the leading text of the brief description, will be stripped from the text +# and the result, after processing the whole list, is used as the annotated +# text. Otherwise, the brief description is used as-is. If left blank, the +# following values are used ($name is automatically replaced with the name of +# the entity):The $name class, The $name widget, The $name file, is, provides, +# specifies, contains, represents, a, an and the. 
+ +ABBREVIATE_BRIEF = "The $name class" \ + "The $name widget" \ + "The $name file" \ + is \ + provides \ + specifies \ + contains \ + represents \ + a \ + an \ + the + +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then +# doxygen will generate a detailed section even if there is only a brief +# description. +# The default value is: NO. + +ALWAYS_DETAILED_SEC = NO + +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all +# inherited members of a class in the documentation of that class as if those +# members were ordinary class members. Constructors, destructors and assignment +# operators of the base classes will not be shown. +# The default value is: NO. + +INLINE_INHERITED_MEMB = NO + +# If the FULL_PATH_NAMES tag is set to YES doxygen will prepend the full path +# before files name in the file list and in the header files. If set to NO the +# shortest path that makes the file name unique will be used +# The default value is: YES. + +FULL_PATH_NAMES = YES + +# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. +# Stripping is only done if one of the specified strings matches the left-hand +# part of the path. The tag can be used to show relative paths in the file list. +# If left blank the directory from which doxygen is run is used as the path to +# strip. +# +# Note that you can specify absolute paths here, but also relative paths, which +# will be relative from the directory where doxygen is started. +# This tag requires that the tag FULL_PATH_NAMES is set to YES. + +STRIP_FROM_PATH = + +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the +# path mentioned in the documentation of a class, which tells the reader which +# header file to include in order to use a class. If left blank only the name of +# the header file containing the class definition is used. 
Otherwise one should
+# specify the list of include paths that are normally passed to the compiler
+# using the -I flag.
+
+STRIP_FROM_INC_PATH =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
+# less readable) file names. This can be useful if your file system doesn't
+# support long names like on DOS, Mac, or CD-ROM.
+# The default value is: NO.
+
+SHORT_NAMES = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
+# first line (until the first dot) of a Javadoc-style comment as the brief
+# description. If set to NO, the Javadoc-style will behave just like regular Qt-
+# style comments (thus requiring an explicit @brief command for a brief
+# description.)
+# The default value is: NO.
+
+JAVADOC_AUTOBRIEF = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
+# line (until the first dot) of a Qt-style comment as the brief description. If
+# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
+# requiring an explicit \brief command for a brief description.)
+# The default value is: NO.
+
+QT_AUTOBRIEF = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
+# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
+# a brief description. This used to be the default behavior. The new default is
+# to treat a multi-line C++ comment block as a detailed description. Set this
+# tag to YES if you prefer the old behavior instead.
+#
+# Note that setting this tag to YES also means that rational rose comments are
+# not recognized any more.
+# The default value is: NO.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
+# documentation from any documented member that it re-implements.
+# The default value is: YES.
+ +INHERIT_DOCS = YES + +# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce a +# new page for each member. If set to NO, the documentation of a member will be +# part of the file/class/namespace that contains it. +# The default value is: NO. + +SEPARATE_MEMBER_PAGES = NO + +# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen +# uses this value to replace tabs by spaces in code fragments. +# Minimum value: 1, maximum value: 16, default value: 4. + +TAB_SIZE = 4 + +# This tag can be used to specify a number of aliases that act as commands in +# the documentation. An alias has the form: +# name=value +# For example adding +# "sideeffect=@par Side Effects:\n" +# will allow you to put the command \sideeffect (or @sideeffect) in the +# documentation, which will result in a user-defined paragraph with heading +# "Side Effects:". You can put \n's in the value part of an alias to insert +# newlines. + +ALIASES = + +# This tag can be used to specify a number of word-keyword mappings (TCL only). +# A mapping has the form "name=value". For example adding "class=itcl::class" +# will allow you to use the command class in the itcl::class meaning. + +TCL_SUBST = + +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources +# only. Doxygen will then generate output that is more tailored for C. For +# instance, some of the names that are used will be different. The list of all +# members will be omitted, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_FOR_C = NO + +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or +# Python sources only. Doxygen will then generate output that is more tailored +# for that language. For instance, namespaces will be presented as packages, +# qualified scopes will look different, etc. +# The default value is: NO. + +OPTIMIZE_OUTPUT_JAVA = NO + +# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran +# sources. 
Doxygen will then generate output that is tailored for Fortran.
+# The default value is: NO.
+
+OPTIMIZE_FOR_FORTRAN = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for VHDL.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_VHDL = NO
+
+# Doxygen selects the parser to use depending on the extension of the files it
+# parses. With this tag you can assign which parser to use for a given
+# extension. Doxygen has a built-in mapping, but you can override or extend it
+# using this tag. The format is ext=language, where ext is a file extension, and
+# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
+# C#, C, C++, D, PHP, Objective-C, Python, Fortran, VHDL. For instance to make
+# doxygen treat .inc files as Fortran files (default is PHP), and .f files as C
+# (default is Fortran), use: inc=Fortran f=C.
+#
+# Note For files without extension you can use no_extension as a placeholder.
+#
+# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
+# the files are not read by doxygen.
+
+EXTENSION_MAPPING =
+
+# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
+# according to the Markdown format, which allows for more readable
+# documentation. See http://daringfireball.net/projects/markdown/ for details.
+# The output of markdown processing is further processed by doxygen, so you can
+# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
+# case of backward compatibility issues.
+# The default value is: YES.
+
+MARKDOWN_SUPPORT = YES
+
+# When enabled doxygen tries to link words that correspond to documented
+# classes, or namespaces to their corresponding documentation. Such a link can
+# be prevented in individual cases by putting a % sign in front of the word
+# or globally by setting AUTOLINK_SUPPORT to NO.
+# The default value is: YES.
+
+AUTOLINK_SUPPORT = YES
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should set this
+# tag to YES in order to let doxygen match function declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string);
+# versus func(std::string) {}). This also makes the inheritance and
+# collaboration diagrams that involve STL classes more complete and accurate.
+# The default value is: NO.
+
+BUILTIN_STL_SUPPORT = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+# The default value is: NO.
+
+CPP_CLI_SUPPORT = NO
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
+# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
+# will parse them like normal C++ but will assume all classes use public instead
+# of private inheritance when no explicit protection keyword is present.
+# The default value is: NO.
+
+SIP_SUPPORT = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate
+# getter and setter methods for a property. Setting this option to YES will make
+# doxygen replace the get and set methods by a property in the documentation.
+# This will only work if the methods are indeed getting or setting a simple
+# type. If this is not the case, or you want to show the methods anyway, you
+# should set this option to NO.
+# The default value is: YES.
+
+IDL_PROPERTY_SUPPORT = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES, then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+# The default value is: NO.
+ +DISTRIBUTE_GROUP_DOC = NO + +# Set the SUBGROUPING tag to YES to allow class member groups of the same type +# (for instance a group of public functions) to be put as a subgroup of that +# type (e.g. under the Public Functions section). Set it to NO to prevent +# subgrouping. Alternatively, this can be done per class using the +# \nosubgrouping command. +# The default value is: YES. + +SUBGROUPING = YES + +# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions +# are shown inside the group in which they are included (e.g. using \ingroup) +# instead of on a separate page (for HTML and Man pages) or section (for LaTeX +# and RTF). +# +# Note that this feature does not work in combination with +# SEPARATE_MEMBER_PAGES. +# The default value is: NO. + +INLINE_GROUPED_CLASSES = NO + +# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions +# with only public data fields or simple typedef fields will be shown inline in +# the documentation of the scope in which they are defined (i.e. file, +# namespace, or group documentation), provided this scope is documented. If set +# to NO, structs, classes, and unions are shown on a separate page (for HTML and +# Man pages) or section (for LaTeX and RTF). +# The default value is: NO. + +INLINE_SIMPLE_STRUCTS = NO + +# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or +# enum is documented as struct, union, or enum with the name of the typedef. So +# typedef struct TypeS {} TypeT, will appear in the documentation as a struct +# with name TypeT. When disabled the typedef will appear as a member of a file, +# namespace, or class. And the struct will be named TypeS. This can typically be +# useful for C code in case the coding convention dictates that all compound +# types are typedef'ed and only the typedef is referenced, never the tag name. +# The default value is: NO. 
+ +TYPEDEF_HIDES_STRUCT = NO + +# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This +# cache is used to resolve symbols given their name and scope. Since this can be +# an expensive process and often the same symbol appears multiple times in the +# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small +# doxygen will become slower. If the cache is too large, memory is wasted. The +# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range +# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 +# symbols. At the end of a run doxygen will report the cache usage and suggest +# the optimal cache size from a speed point of view. +# Minimum value: 0, maximum value: 9, default value: 0. + +LOOKUP_CACHE_SIZE = 0 + +#--------------------------------------------------------------------------- +# Build related configuration options +#--------------------------------------------------------------------------- + +# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in +# documentation are documented, even if no documentation was available. Private +# class members and static file members will be hidden unless the +# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. +# Note: This will also disable the warnings about undocumented members that are +# normally produced when WARNINGS is set to YES. +# The default value is: NO. + +EXTRACT_ALL = NO + +# If the EXTRACT_PRIVATE tag is set to YES all private members of a class will +# be included in the documentation. +# The default value is: NO. + +EXTRACT_PRIVATE = NO + +# If the EXTRACT_PACKAGE tag is set to YES all members with package or internal +# scope will be included in the documentation. +# The default value is: NO. + +EXTRACT_PACKAGE = NO + +# If the EXTRACT_STATIC tag is set to YES all static members of a file will be +# included in the documentation. +# The default value is: NO. 
+
+EXTRACT_STATIC = NO
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) defined
+# locally in source files will be included in the documentation. If set to NO
+# only classes defined in header files are included. Does not have any effect
+# for Java sources.
+# The default value is: YES.
+
+EXTRACT_LOCAL_CLASSES = YES
+
+# This flag is only useful for Objective-C code. When set to YES local methods,
+# which are defined in the implementation section but not in the interface are
+# included in the documentation. If set to NO only methods in the interface are
+# included.
+# The default value is: NO.
+
+EXTRACT_LOCAL_METHODS = NO
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base name of
+# the file that contains the anonymous namespace. By default anonymous
+# namespaces are hidden.
+# The default value is: NO.
+
+EXTRACT_ANON_NSPACES = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
+# undocumented members inside documented classes or files. If set to NO these
+# members will be included in the various overviews, but no documentation
+# section is generated. This option has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_MEMBERS = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy. If set
+# to NO these classes will be included in the various overviews. This option has
+# no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_CLASSES = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
+# (class|struct|union) declarations. If set to NO these declarations will be
+# included in the documentation.
+# The default value is: NO.
+ +HIDE_FRIEND_COMPOUNDS = NO + +# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any +# documentation blocks found inside the body of a function. If set to NO these +# blocks will be appended to the function's detailed documentation block. +# The default value is: NO. + +HIDE_IN_BODY_DOCS = NO + +# The INTERNAL_DOCS tag determines if documentation that is typed after a +# \internal command is included. If the tag is set to NO then the documentation +# will be excluded. Set it to YES to include the internal documentation. +# The default value is: NO. + +INTERNAL_DOCS = NO + +# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file +# names in lower-case letters. If set to YES upper-case letters are also +# allowed. This is useful if you have classes or files whose names only differ +# in case and if your file system supports case sensitive file names. Windows +# and Mac users are advised to set this option to NO. +# The default value is: system dependent. + +CASE_SENSE_NAMES = NO + +# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with +# their full class and namespace scopes in the documentation. If set to YES the +# scope will be hidden. +# The default value is: NO. + +HIDE_SCOPE_NAMES = NO + +# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of +# the files that are included by a file in the documentation of that file. +# The default value is: YES. + +SHOW_INCLUDE_FILES = YES + +# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each +# grouped member an include statement to the documentation, telling the reader +# which file to include in order to use the member. +# The default value is: NO. + +SHOW_GROUPED_MEMB_INC = NO + +# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include +# files with double quotes in the documentation rather than with sharp brackets. +# The default value is: NO. 
+ +FORCE_LOCAL_INCLUDES = NO + +# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the +# documentation for inline members. +# The default value is: YES. + +INLINE_INFO = YES + +# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the +# (detailed) documentation of file and class members alphabetically by member +# name. If set to NO the members will appear in declaration order. +# The default value is: YES. + +SORT_MEMBER_DOCS = YES + +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief +# descriptions of file, namespace and class members alphabetically by member +# name. If set to NO the members will appear in declaration order. Note that +# this will also influence the order of the classes in the class list. +# The default value is: NO. + +SORT_BRIEF_DOCS = NO + +# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the +# (brief and detailed) documentation of class members so that constructors and +# destructors are listed first. If set to NO the constructors will appear in the +# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. +# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief +# member documentation. +# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting +# detailed member documentation. +# The default value is: NO. + +SORT_MEMBERS_CTORS_1ST = NO + +# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy +# of group names into alphabetical order. If set to NO the group names will +# appear in their defined order. +# The default value is: NO. + +SORT_GROUP_NAMES = NO + +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by +# fully-qualified names, including namespaces. If set to NO, the class list will +# be sorted only by class name, not including the namespace part. +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. 
+# Note: This option applies only to the class list, not to the alphabetical +# list. +# The default value is: NO. + +SORT_BY_SCOPE_NAME = NO + +# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper +# type resolution of all parameters of a function it will reject a match between +# the prototype and the implementation of a member function even if there is +# only one candidate or it is obvious which candidate to choose by doing a +# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still +# accept a match between prototype and implementation in such cases. +# The default value is: NO. + +STRICT_PROTO_MATCHING = NO + +# The GENERATE_TODOLIST tag can be used to enable ( YES) or disable ( NO) the +# todo list. This list is created by putting \todo commands in the +# documentation. +# The default value is: YES. + +GENERATE_TODOLIST = YES + +# The GENERATE_TESTLIST tag can be used to enable ( YES) or disable ( NO) the +# test list. This list is created by putting \test commands in the +# documentation. +# The default value is: YES. + +GENERATE_TESTLIST = YES + +# The GENERATE_BUGLIST tag can be used to enable ( YES) or disable ( NO) the bug +# list. This list is created by putting \bug commands in the documentation. +# The default value is: YES. + +GENERATE_BUGLIST = YES + +# The GENERATE_DEPRECATEDLIST tag can be used to enable ( YES) or disable ( NO) +# the deprecated list. This list is created by putting \deprecated commands in +# the documentation. +# The default value is: YES. + +GENERATE_DEPRECATEDLIST= YES + +# The ENABLED_SECTIONS tag can be used to enable conditional documentation +# sections, marked by \if ... \endif and \cond +# ... \endcond blocks. + +ENABLED_SECTIONS = + +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the +# initial value of a variable or macro / define can have for it to appear in the +# documentation. 
If the initializer consists of more lines than specified here +# it will be hidden. Use a value of 0 to hide initializers completely. The +# appearance of the value of individual variables and macros / defines can be +# controlled using \showinitializer or \hideinitializer command in the +# documentation regardless of this setting. +# Minimum value: 0, maximum value: 10000, default value: 30. + +MAX_INITIALIZER_LINES = 30 + +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at +# the bottom of the documentation of classes and structs. If set to YES the list +# will mention the files that were used to generate the documentation. +# The default value is: YES. + +SHOW_USED_FILES = YES + +# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This +# will remove the Files entry from the Quick Index and from the Folder Tree View +# (if specified). +# The default value is: YES. + +SHOW_FILES = YES + +# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces +# page. This will remove the Namespaces entry from the Quick Index and from the +# Folder Tree View (if specified). +# The default value is: YES. + +SHOW_NAMESPACES = YES + +# The FILE_VERSION_FILTER tag can be used to specify a program or script that +# doxygen should invoke to get the current version for each file (typically from +# the version control system). Doxygen will invoke the program by executing (via +# popen()) the command command input-file, where command is the value of the +# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided +# by doxygen. Whatever the program writes to standard output is used as the file +# version. For an example see the documentation. + +FILE_VERSION_FILTER = + +# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed +# by doxygen. The layout file controls the global structure of the generated +# output files in an output format independent way. 
To create the layout file +# that represents doxygen's defaults, run doxygen with the -l option. You can +# optionally specify a file name after the option, if omitted DoxygenLayout.xml +# will be used as the name of the layout file. +# +# Note that if you run doxygen from a directory containing a file called +# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE +# tag is left empty. + +LAYOUT_FILE = + +# The CITE_BIB_FILES tag can be used to specify one or more bib files containing +# the reference definitions. This must be a list of .bib files. The .bib +# extension is automatically appended if omitted. This requires the bibtex tool +# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info. +# For LaTeX the style of the bibliography can be controlled using +# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the +# search path. Do not use file names with spaces, bibtex cannot handle them. See +# also \cite for info how to create references. + +CITE_BIB_FILES = + +#--------------------------------------------------------------------------- +# Configuration options related to warning and progress messages +#--------------------------------------------------------------------------- + +# The QUIET tag can be used to turn on/off the messages that are generated to +# standard output by doxygen. If QUIET is set to YES this implies that the +# messages are off. +# The default value is: NO. + +QUIET = NO + +# The WARNINGS tag can be used to turn on/off the warning messages that are +# generated to standard error ( stderr) by doxygen. If WARNINGS is set to YES +# this implies that the warnings are on. +# +# Tip: Turn warnings on while writing the documentation. +# The default value is: YES. + +WARNINGS = YES + +# If the WARN_IF_UNDOCUMENTED tag is set to YES, then doxygen will generate +# warnings for undocumented members. 
If EXTRACT_ALL is set to YES then this flag +# will automatically be disabled. +# The default value is: YES. + +WARN_IF_UNDOCUMENTED = YES + +# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for +# potential errors in the documentation, such as not documenting some parameters +# in a documented function, or documenting parameters that don't exist or using +# markup commands wrongly. +# The default value is: YES. + +WARN_IF_DOC_ERROR = YES + +# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that +# are documented, but have no documentation for their parameters or return +# value. If set to NO doxygen will only warn about wrong or incomplete parameter +# documentation, but not about the absence of documentation. +# The default value is: NO. + +WARN_NO_PARAMDOC = NO + +# The WARN_FORMAT tag determines the format of the warning messages that doxygen +# can produce. The string should contain the $file, $line, and $text tags, which +# will be replaced by the file and line number from which the warning originated +# and the warning text. Optionally the format may contain $version, which will +# be replaced by the version of the file (if it could be obtained via +# FILE_VERSION_FILTER) +# The default value is: $file:$line: $text. + +WARN_FORMAT = "$file:$line: $text" + +# The WARN_LOGFILE tag can be used to specify a file to which warning and error +# messages should be written. If left blank the output is written to standard +# error (stderr). + +WARN_LOGFILE = + +#--------------------------------------------------------------------------- +# Configuration options related to the input files +#--------------------------------------------------------------------------- + +# The INPUT tag is used to specify the files and/or directories that contain +# documented source files. You may enter file names like myfile.cpp or +# directories like /usr/src/myproject. Separate the files or directories with +# spaces. 
+# Note: If this tag is empty the current directory is searched. + +INPUT = @CMAKE_CURRENT_SOURCE_DIR@/include \ + @CMAKE_CURRENT_SOURCE_DIR@/src + +# This tag can be used to specify the character encoding of the source files +# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses +# libiconv (or the iconv built into libc) for the transcoding. See the libiconv +# documentation (see: http://www.gnu.org/software/libiconv) for the list of +# possible encodings. +# The default value is: UTF-8. + +INPUT_ENCODING = UTF-8 + +# If the value of the INPUT tag contains directories, you can use the +# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and +# *.h) to filter out the source-files in the directories. If left blank the +# following patterns are tested:*.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii, +# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp, +# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown, +# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf, +# *.qsf, *.as and *.js. + +FILE_PATTERNS = *.c \ + *.cc \ + *.cxx \ + *.cpp \ + *.c++ \ + *.java \ + *.ii \ + *.ixx \ + *.ipp \ + *.i++ \ + *.inl \ + *.idl \ + *.ddl \ + *.odl \ + *.h \ + *.hh \ + *.hxx \ + *.hpp \ + *.h++ \ + *.cs \ + *.d \ + *.php \ + *.php4 \ + *.php5 \ + *.phtml \ + *.inc \ + *.m \ + *.markdown \ + *.md \ + *.mm \ + *.dox \ + *.py \ + *.f90 \ + *.f \ + *.for \ + *.tcl \ + *.vhd \ + *.vhdl \ + *.ucf \ + *.qsf \ + *.as \ + *.js + +# The RECURSIVE tag can be used to specify whether or not subdirectories should +# be searched for input files as well. +# The default value is: NO. + +RECURSIVE = YES + +# The EXCLUDE tag can be used to specify files and/or directories that should be +# excluded from the INPUT source files. This way you can easily exclude a +# subdirectory from a directory tree whose root is specified with the INPUT tag. 
+# +# Note that relative paths are relative to the directory from which doxygen is +# run. + +EXCLUDE = @CMAKE_CURRENT_SOURCE_DIR@/include/htd/TemplateInstantiations.hpp \ + @CMAKE_CURRENT_SOURCE_DIR@/src/htd/TemplateInstantiations.cpp + +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or +# directories that are symbolic links (a Unix file system feature) are excluded +# from the input. +# The default value is: NO. + +EXCLUDE_SYMLINKS = NO + +# If the value of the INPUT tag contains directories, you can use the +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude +# certain files from those directories. +# +# Note that the wildcards are matched against the file with absolute path, so to +# exclude all test directories for example use the pattern */test/* + +EXCLUDE_PATTERNS = + +# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names +# (namespaces, classes, functions, etc.) that should be excluded from the +# output. The symbol name can be a fully qualified name, a word, or if the +# wildcard * is used, a substring. Examples: ANamespace, AClass, +# AClass::ANamespace, ANamespace::*Test +# +# Note that the wildcards are matched against the file with absolute path, so to +# exclude all test directories use the pattern */test/* + +EXCLUDE_SYMBOLS = + +# The EXAMPLE_PATH tag can be used to specify one or more files or directories +# that contain example code fragments that are included (see the \include +# command). + +EXAMPLE_PATH = + +# If the value of the EXAMPLE_PATH tag contains directories, you can use the +# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and +# *.h) to filter out the source-files in the directories. If left blank all +# files are included. 
+
+EXAMPLE_PATTERNS = *
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+# <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output. If FILTER_PATTERNS is specified, this tag
+# will be ignored.
+#
+# Note that the filter must not add or remove lines; it is applied before the
+# code is scanned, but not when the output code is generated. If lines are added
+# or removed, the anchors will not be placed correctly.
+
+INPUT_FILTER =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis. Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match. The filters are a list of the form: pattern=filter
+# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
+# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
+# patterns match the file name, INPUT_FILTER is applied.
+
+FILTER_PATTERNS =
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER ) will also be used to filter the input files that are used for
+# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
+# The default value is: NO.
+ +FILTER_SOURCE_FILES = NO + +# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file +# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and +# it is also possible to disable source filtering for a specific pattern using +# *.ext= (so without naming a filter). +# This tag requires that the tag FILTER_SOURCE_FILES is set to YES. + +FILTER_SOURCE_PATTERNS = + +# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that +# is part of the input, its contents will be placed on the main page +# (index.html). This can be useful if you have a project on for instance GitHub +# and want to reuse the introduction page also for the doxygen output. + +USE_MDFILE_AS_MAINPAGE = + +#--------------------------------------------------------------------------- +# Configuration options related to source browsing +#--------------------------------------------------------------------------- + +# If the SOURCE_BROWSER tag is set to YES then a list of source files will be +# generated. Documented entities will be cross-referenced with these sources. +# +# Note: To get rid of all source code in the generated output, make sure that +# also VERBATIM_HEADERS is set to NO. +# The default value is: NO. + +SOURCE_BROWSER = NO + +# Setting the INLINE_SOURCES tag to YES will include the body of functions, +# classes and enums directly into the documentation. +# The default value is: NO. + +INLINE_SOURCES = NO + +# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any +# special comment blocks from generated source code fragments. Normal C, C++ and +# Fortran comments will always remain visible. +# The default value is: YES. + +STRIP_CODE_COMMENTS = YES + +# If the REFERENCED_BY_RELATION tag is set to YES then for each documented +# function all documented functions referencing it will be listed. +# The default value is: NO. 
+ +REFERENCED_BY_RELATION = NO + +# If the REFERENCES_RELATION tag is set to YES then for each documented function +# all documented entities called/used by that function will be listed. +# The default value is: NO. + +REFERENCES_RELATION = NO + +# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set +# to YES, then the hyperlinks from functions in REFERENCES_RELATION and +# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will +# link to the documentation. +# The default value is: YES. + +REFERENCES_LINK_SOURCE = YES + +# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the +# source code will show a tooltip with additional information such as prototype, +# brief description and links to the definition and documentation. Since this +# will make the HTML file larger and loading of large files a bit slower, you +# can opt to disable this feature. +# The default value is: YES. +# This tag requires that the tag SOURCE_BROWSER is set to YES. + +SOURCE_TOOLTIPS = YES + +# If the USE_HTAGS tag is set to YES then the references to source code will +# point to the HTML generated by the htags(1) tool instead of doxygen built-in +# source browser. The htags tool is part of GNU's global source tagging system +# (see http://www.gnu.org/software/global/global.html). You will need version +# 4.8.6 or higher. +# +# To use it do the following: +# - Install the latest version of global +# - Enable SOURCE_BROWSER and USE_HTAGS in the config file +# - Make sure the INPUT points to the root of the source tree +# - Run doxygen as normal +# +# Doxygen will invoke htags (and that will in turn invoke gtags), so these +# tools must be available from the command line (i.e. in the search path). +# +# The result: instead of the source browser generated by doxygen, the links to +# source code will now point to the output of htags. +# The default value is: NO. 
+# This tag requires that the tag SOURCE_BROWSER is set to YES. + +USE_HTAGS = NO + +# If the VERBATIM_HEADERS tag is set the YES then doxygen will generate a +# verbatim copy of the header file for each class for which an include is +# specified. Set to NO to disable this. +# See also: Section \class. +# The default value is: YES. + +VERBATIM_HEADERS = YES + +#--------------------------------------------------------------------------- +# Configuration options related to the alphabetical class index +#--------------------------------------------------------------------------- + +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all +# compounds will be generated. Enable this if the project contains a lot of +# classes, structs, unions or interfaces. +# The default value is: YES. + +ALPHABETICAL_INDEX = YES + +# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in +# which the alphabetical index list will be split. +# Minimum value: 1, maximum value: 20, default value: 5. +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +COLS_IN_ALPHA_INDEX = 5 + +# In case all classes in a project start with a common prefix, all classes will +# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag +# can be used to specify a prefix (or a list of prefixes) that should be ignored +# while generating the index headers. +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES. + +IGNORE_PREFIX = + +#--------------------------------------------------------------------------- +# Configuration options related to the HTML output +#--------------------------------------------------------------------------- + +# If the GENERATE_HTML tag is set to YES doxygen will generate HTML output +# The default value is: YES. + +GENERATE_HTML = YES + +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 
If a +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of +# it. +# The default directory is: html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_OUTPUT = html + +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each +# generated HTML page (for example: .htm, .php, .asp). +# The default value is: .html. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_FILE_EXTENSION = .html + +# The HTML_HEADER tag can be used to specify a user-defined HTML header file for +# each generated HTML page. If the tag is left blank doxygen will generate a +# standard header. +# +# To get valid HTML the header file that includes any scripts and style sheets +# that doxygen needs, which is dependent on the configuration options used (e.g. +# the setting GENERATE_TREEVIEW). It is highly recommended to start with a +# default header using +# doxygen -w html new_header.html new_footer.html new_stylesheet.css +# YourConfigFile +# and then modify the file new_header.html. See also section "Doxygen usage" +# for information on how to generate the default header that doxygen normally +# uses. +# Note: The header is subject to change so you typically have to regenerate the +# default header when upgrading to a newer version of doxygen. For a description +# of the possible markers and block names see the documentation. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_HEADER = + +# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each +# generated HTML page. If the tag is left blank doxygen will generate a standard +# footer. See HTML_HEADER for more information on how to generate a default +# footer and what special commands can be used inside the footer. See also +# section "Doxygen usage" for information on how to generate the default footer +# that doxygen normally uses. +# This tag requires that the tag GENERATE_HTML is set to YES. 
+ +HTML_FOOTER = + +# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style +# sheet that is used by each HTML page. It can be used to fine-tune the look of +# the HTML output. If left blank doxygen will generate a default style sheet. +# See also section "Doxygen usage" for information on how to generate the style +# sheet that doxygen normally uses. +# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as +# it is more robust and this tag (HTML_STYLESHEET) will in the future become +# obsolete. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_STYLESHEET = + +# The HTML_EXTRA_STYLESHEET tag can be used to specify an additional user- +# defined cascading style sheet that is included after the standard style sheets +# created by doxygen. Using this option one can overrule certain style aspects. +# This is preferred over using HTML_STYLESHEET since it does not replace the +# standard style sheet and is therefor more robust against future updates. +# Doxygen will copy the style sheet file to the output directory. For an example +# see the documentation. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_EXTRA_STYLESHEET = + +# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or +# other source files which should be copied to the HTML output directory. Note +# that these files will be copied to the base HTML output directory. Use the +# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these +# files. In the HTML_STYLESHEET file, use the file name only. Also note that the +# files will be copied as-is; there are no commands or markers available. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_EXTRA_FILES = + +# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen +# will adjust the colors in the stylesheet and background images according to +# this color. 
Hue is specified as an angle on a colorwheel, see +# http://en.wikipedia.org/wiki/Hue for more information. For instance the value +# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 +# purple, and 360 is red again. +# Minimum value: 0, maximum value: 359, default value: 220. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_HUE = 220 + +# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors +# in the HTML output. For a value of 0 the output will use grayscales only. A +# value of 255 will produce the most vivid colors. +# Minimum value: 0, maximum value: 255, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_SAT = 100 + +# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the +# luminance component of the colors in the HTML output. Values below 100 +# gradually make the output lighter, whereas values above 100 make the output +# darker. The value divided by 100 is the actual gamma applied, so 80 represents +# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not +# change the gamma. +# Minimum value: 40, maximum value: 240, default value: 80. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_COLORSTYLE_GAMMA = 80 + +# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML +# page will contain the date and time when the page was generated. Setting this +# to NO can help when comparing the output of multiple runs. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_TIMESTAMP = YES + +# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML +# documentation will contain sections that can be hidden and shown after the +# page has loaded. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. 
+ +HTML_DYNAMIC_SECTIONS = NO + +# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries +# shown in the various tree structured indices initially; the user can expand +# and collapse entries dynamically later on. Doxygen will expand the tree to +# such a level that at most the specified number of entries are visible (unless +# a fully collapsed tree already exceeds this amount). So setting the number of +# entries 1 will produce a full collapsed tree by default. 0 is a special value +# representing an infinite number of entries and will result in a full expanded +# tree by default. +# Minimum value: 0, maximum value: 9999, default value: 100. +# This tag requires that the tag GENERATE_HTML is set to YES. + +HTML_INDEX_NUM_ENTRIES = 100 + +# If the GENERATE_DOCSET tag is set to YES, additional index files will be +# generated that can be used as input for Apple's Xcode 3 integrated development +# environment (see: http://developer.apple.com/tools/xcode/), introduced with +# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a +# Makefile in the HTML output directory. Running make will produce the docset in +# that directory and running make install will install the docset in +# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at +# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html +# for more information. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_DOCSET = NO + +# This tag determines the name of the docset feed. A documentation feed provides +# an umbrella under which multiple documentation sets from a single provider +# (such as a company or product suite) can be grouped. +# The default value is: Doxygen generated docs. +# This tag requires that the tag GENERATE_DOCSET is set to YES. 
+ +DOCSET_FEEDNAME = "Doxygen generated docs" + +# This tag specifies a string that should uniquely identify the documentation +# set bundle. This should be a reverse domain-name style string, e.g. +# com.mycompany.MyDocSet. Doxygen will append .docset to the name. +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_BUNDLE_ID = org.doxygen.Project + +# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify +# the documentation publisher. This should be a reverse domain-name style +# string, e.g. com.mycompany.MyDocSet.documentation. +# The default value is: org.doxygen.Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_ID = org.doxygen.Publisher + +# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. +# The default value is: Publisher. +# This tag requires that the tag GENERATE_DOCSET is set to YES. + +DOCSET_PUBLISHER_NAME = Publisher + +# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three +# additional HTML index files: index.hhp, index.hhc, and index.hhk. The +# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop +# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on +# Windows. +# +# The HTML Help Workshop contains a compiler that can convert all HTML output +# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML +# files are now used as the Windows 98 help format, and will replace the old +# Windows help format (.hlp) on all Windows platforms in the future. Compressed +# HTML files also contain an index, a table of contents, and you can search for +# words in the documentation. The HTML workshop also contains a viewer for +# compressed HTML files. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. 
+ +GENERATE_HTMLHELP = NO + +# The CHM_FILE tag can be used to specify the file name of the resulting .chm +# file. You can add a path in front of the file if the result should not be +# written to the html output directory. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +CHM_FILE = + +# The HHC_LOCATION tag can be used to specify the location (absolute path +# including file name) of the HTML help compiler ( hhc.exe). If non-empty +# doxygen will try to run the HTML help compiler on the generated index.hhp. +# The file has to be specified with full path. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +HHC_LOCATION = + +# The GENERATE_CHI flag controls if a separate .chi index file is generated ( +# YES) or that it should be included in the master .chm file ( NO). +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +GENERATE_CHI = NO + +# The CHM_INDEX_ENCODING is used to encode HtmlHelp index ( hhk), content ( hhc) +# and project file content. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +CHM_INDEX_ENCODING = + +# The BINARY_TOC flag controls whether a binary table of contents is generated ( +# YES) or a normal table of contents ( NO) in the .chm file. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +BINARY_TOC = NO + +# The TOC_EXPAND flag can be set to YES to add extra items for group members to +# the table of contents of the HTML help documentation and to the tree view. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTMLHELP is set to YES. + +TOC_EXPAND = NO + +# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and +# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that +# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help +# (.qch) of the generated HTML documentation. +# The default value is: NO. 
+# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_QHP = NO + +# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify +# the file name of the resulting .qch file. The path specified is relative to +# the HTML output folder. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QCH_FILE = + +# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help +# Project output. For more information please see Qt Help Project / Namespace +# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace). +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_NAMESPACE = org.doxygen.Project + +# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt +# Help Project output. For more information please see Qt Help Project / Virtual +# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual- +# folders). +# The default value is: doc. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_VIRTUAL_FOLDER = doc + +# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom +# filter to add. For more information please see Qt Help Project / Custom +# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- +# filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_NAME = + +# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the +# custom filter to add. For more information please see Qt Help Project / Custom +# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- +# filters). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_CUST_FILTER_ATTRS = + +# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this +# project's filter section matches. 
Qt Help Project / Filter Attributes (see: +# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes). +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHP_SECT_FILTER_ATTRS = + +# The QHG_LOCATION tag can be used to specify the location of Qt's +# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the +# generated .qhp file. +# This tag requires that the tag GENERATE_QHP is set to YES. + +QHG_LOCATION = + +# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be +# generated, together with the HTML files, they form an Eclipse help plugin. To +# install this plugin and make it available under the help contents menu in +# Eclipse, the contents of the directory containing the HTML and XML files needs +# to be copied into the plugins directory of eclipse. The name of the directory +# within the plugins directory should be the same as the ECLIPSE_DOC_ID value. +# After copying Eclipse needs to be restarted before the help appears. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_ECLIPSEHELP = NO + +# A unique identifier for the Eclipse help plugin. When installing the plugin +# the directory name containing the HTML and XML files should also have this +# name. Each documentation set should have its own identifier. +# The default value is: org.doxygen.Project. +# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES. + +ECLIPSE_DOC_ID = org.doxygen.Project + +# If you want full control over the layout of the generated HTML pages it might +# be necessary to disable the index and replace it with your own. The +# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top +# of each HTML page. A value of NO enables the index and the value YES disables +# it. Since the tabs in the index contain the same information as the navigation +# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES. 
+# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +DISABLE_INDEX = NO + +# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index +# structure should be generated to display hierarchical information. If the tag +# value is set to YES, a side panel will be generated containing a tree-like +# index structure (just like the one that is generated for HTML Help). For this +# to work a browser that supports JavaScript, DHTML, CSS and frames is required +# (i.e. any modern browser). Windows users are probably better off using the +# HTML help feature. Via custom stylesheets (see HTML_EXTRA_STYLESHEET) one can +# further fine-tune the look of the index. As an example, the default style +# sheet generated by doxygen has an example that shows how to put an image at +# the root of the tree instead of the PROJECT_NAME. Since the tree basically has +# the same information as the tab index, you could consider setting +# DISABLE_INDEX to YES when enabling this option. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +GENERATE_TREEVIEW = NO + +# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that +# doxygen will group on one line in the generated HTML documentation. +# +# Note that a value of 0 will completely suppress the enum values from appearing +# in the overview section. +# Minimum value: 0, maximum value: 20, default value: 4. +# This tag requires that the tag GENERATE_HTML is set to YES. + +ENUM_VALUES_PER_LINE = 4 + +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used +# to set the initial width (in pixels) of the frame in which the tree is shown. +# Minimum value: 0, maximum value: 1500, default value: 250. +# This tag requires that the tag GENERATE_HTML is set to YES. 
+ +TREEVIEW_WIDTH = 250 + +# When the EXT_LINKS_IN_WINDOW option is set to YES doxygen will open links to +# external symbols imported via tag files in a separate window. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +EXT_LINKS_IN_WINDOW = NO + +# Use this tag to change the font size of LaTeX formulas included as images in +# the HTML documentation. When you change the font size after a successful +# doxygen run you need to manually remove any form_*.png images from the HTML +# output directory to force them to be regenerated. +# Minimum value: 8, maximum value: 50, default value: 10. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FORMULA_FONTSIZE = 10 + +# Use the FORMULA_TRANPARENT tag to determine whether or not the images +# generated for formulas are transparent PNGs. Transparent PNGs are not +# supported properly for IE 6.0, but are supported on all modern browsers. +# +# Note that when changing this option you need to delete any form_*.png files in +# the HTML output directory before the changes have effect. +# The default value is: YES. +# This tag requires that the tag GENERATE_HTML is set to YES. + +FORMULA_TRANSPARENT = YES + +# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see +# http://www.mathjax.org) which uses client side Javascript for the rendering +# instead of using prerendered bitmaps. Use this if you do not have LaTeX +# installed or if you want to formulas look prettier in the HTML output. When +# enabled you may also need to install MathJax separately and configure the path +# to it using the MATHJAX_RELPATH option. +# The default value is: NO. +# This tag requires that the tag GENERATE_HTML is set to YES. + +USE_MATHJAX = NO + +# When MathJax is enabled you can set the default output format to be used for +# the MathJax output. See the MathJax site (see: +# http://docs.mathjax.org/en/latest/output.html) for more details. 
+# Possible values are: HTML-CSS (which is slower, but has the best +# compatibility), NativeMML (i.e. MathML) and SVG. +# The default value is: HTML-CSS. +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_FORMAT = HTML-CSS + +# When MathJax is enabled you need to specify the location relative to the HTML +# output directory using the MATHJAX_RELPATH option. The destination directory +# should contain the MathJax.js script. For instance, if the mathjax directory +# is located at the same level as the HTML output directory, then +# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax +# Content Delivery Network so you can quickly see the result without installing +# MathJax. However, it is strongly recommended to install a local copy of +# MathJax from http://www.mathjax.org before deployment. +# The default value is: http://cdn.mathjax.org/mathjax/latest. +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest + +# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax +# extension names that should be enabled during MathJax rendering. For example +# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_EXTENSIONS = + +# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces +# of code that will be used on startup of the MathJax code. See the MathJax site +# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an +# example see the documentation. +# This tag requires that the tag USE_MATHJAX is set to YES. + +MATHJAX_CODEFILE = + +# When the SEARCHENGINE tag is enabled doxygen will generate a search box for +# the HTML output. The underlying search engine uses javascript and DHTML and +# should work on any modern browser. 
Note that when using HTML help +# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET) +# there is already a search function so this one should typically be disabled. +# For large projects the javascript based search engine can be slow, then +# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to +# search using the keyboard; to jump to the search box use <access key> + S +# (what the <access key> is depends on the OS and browser, but it is typically +# <CTRL>, <ALT>/