Add instructions to create a new test set
stephane-caron committed Dec 13, 2023
1 parent 2b7e3e3 commit f8a7d55
Showing 2 changed files with 35 additions and 28 deletions.
9 changes: 9 additions & 0 deletions CONTRIBUTING.md
@@ -16,3 +16,12 @@ This project's goal is to facilitate the comparison of quadratic programming sol
- Set the solver's absolute tolerance in the `set_eps_abs` function in `solver_settings.py`
- Set the solver's relative tolerance in the `set_eps_rel` function in `solver_settings.py`
- Set the solver's time limit (if applicable) in the `set_time_limit` function in `solver_settings.py` (see the sketch after this list)
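
For orientation, here is a minimal sketch of what those three hooks could look like. The class body, the `settings` dictionary, and the `"mysolver"` key are illustrative assumptions rather than the actual `qpbenchmark` API; refer to `solver_settings.py` in an existing test set for the real signatures and each solver's parameter names.

```python
# solver_settings.py -- illustrative sketch only; the real file in
# qpbenchmark defines the actual signatures and solver keyword names.
class SolverSettings:
    def __init__(self):
        # Maps each solver name to the keyword arguments passed to it.
        self.settings: dict[str, dict] = {"mysolver": {}}

    def set_eps_abs(self, eps_abs: float) -> None:
        # Each solver names its absolute-tolerance parameter differently;
        # "eps_abs" here is a placeholder for your solver's keyword.
        self.settings["mysolver"]["eps_abs"] = eps_abs

    def set_eps_rel(self, eps_rel: float) -> None:
        self.settings["mysolver"]["eps_rel"] = eps_rel

    def set_time_limit(self, time_limit: float) -> None:
        # Skip this hook if the solver has no time-limit parameter.
        self.settings["mysolver"]["time_limit"] = time_limit
```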

## Creating a new test set

- Create a new repository for your test set
- Add a main Python file named after the test set, say ``foo_bar.py``
- Implement a `FooBar` class in this file deriving from `qpbenchmark.TestCase`
- The class name should match the file name, converted to PascalCase (see the skeleton below)

Check out how this is done in *e.g.* the [Maros-Meszaros test set](https://github.com/qpsolvers/maros_meszaros_qpbenchmark).
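
For orientation, a minimal `foo_bar.py` might start like the following skeleton. Only the import and the class declaration are prescribed above; the methods to override depend on the actual `qpbenchmark.TestCase` interface, so mirror an existing test set such as Maros-Meszaros for the rest.

```python
# foo_bar.py -- skeleton for a new test set (a sketch; check an existing
# test set for the TestCase methods your class needs to override).
import qpbenchmark


class FooBar(qpbenchmark.TestCase):
    """FooBar test set: PascalCase class name matching foo_bar.py."""
```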
54 changes: 26 additions & 28 deletions README.md
@@ -8,34 +8,15 @@ Benchmark for quadratic programming (QP) solvers available in Python.

The objective is to compare and select the best QP solvers for given use cases. The benchmarking methodology is open to [discussions](https://github.com/qpsolvers/qpbenchmark/discussions). Standard and community [test sets](#test-sets) are available: all of them can be processed using the ``qpbenchmark`` command-line tool, resulting in standardized reports evaluating all [metrics](#metrics) across all QP solvers available on the test machine.

## Installation

The recommended process is to install the benchmark and all solvers in an isolated environment using ``conda``:

```console
conda env create -f environment.yaml
conda activate qpbenchmark
```

Alternatively, you can install the benchmarking tool individually using ``pip``:

```console
pip install qpbenchmark
```

In that case, the benchmark will run all supported solvers it finds. (Quick way to install open source solvers from PyPI: ``pip install qpsolvers[open_source_solvers]``.)

## Test sets

The benchmark comes with standard and community test sets to represent different use cases for QP solvers:

| Test set | Problems | Brief description |
| ------------------------ | -------- | ----------------- |
| **Maros-Meszaros** | 138 | Standard, designed to be difficult. |
| **Maros-Meszaros dense** | 62 | Subset of Maros-Meszaros restricted to smaller dense problems. |
| **GitHub free-for-all** | 12 | Community-built, new problems [are welcome](https://github.com/qpsolvers/qpbenchmark/issues/new?template=new_problem.md)! |
- **GitHub free-for-all**: community-built, new problems [are welcome](https://github.com/qpsolvers/qpbenchmark/issues/new?template=new_problem.md)!
- [Maros-Meszaros](https://github.com/qpsolvers/maros_meszaros_qpbenchmark): a standard test set with problems designed to be difficult.
- [Model predictive control](https://github.com/qpsolvers/mpc_qpbenchmark): model predictive control problems arising *e.g.* in robotics.

New test sets are welcome! The benchmark is designed so that each test set comes in a standalone directory. Check out the existing test sets below, and feel free to create a new one that better matches your particular use cases.
New test sets are welcome! The `qpbenchmark` tool is designed to make it easy to wrap up a new test set without re-implementing the benchmark methodology. Check out [creating a new test set](CONTRIBUTING.md).

## Solvers

@@ -100,18 +81,35 @@ Here are some known areas of improvement for this benchmark:

Check out the [issue tracker](https://github.com/qpsolvers/qpbenchmark/issues) for ongoing works and future improvements.

## Running the benchmark
## Installation

The recommended process is to install the benchmark and all solvers in an isolated environment using ``conda``:

```console
conda env create -f environment.yaml
conda activate qpbenchmark
```

Alternatively, you can install the benchmarking tool individually using ``pip``:

```console
pip install qpbenchmark
```

In that case, the benchmark will run all supported solvers it finds. (Quick way to install open source solvers from PyPI: ``pip install qpsolvers[open_source_solvers]``.)

## Usage

Once the benchmark is installed, you will be able to run the ``qpbenchmark`` command. Provide it with the script corresponding to the [test set](#test-sets) you want to run, followed by a benchmark command such as "run". For instance, let's run the "dense" subset of the Maros-Meszaros test set:
The benchmark works by running ``qpbenchmark`` on a Python script describing the test set. For instance:

```console
qpbenchmark maros_meszaros/maros_meszaros_dense.py run
qpbenchmark github_ffa/github_ffa.py run
```

You can also run a specific solver, problem or set of solver settings:
The test-set script is followed by a benchmark command, such as "run" here. Optional arguments restrict the run to a specific solver, problem, or set of solver settings:

```console
qpbenchmark maros_meszaros/maros_meszaros_dense.py run --solver proxqp --settings default
qpbenchmark github_ffa/github_ffa.py run --solver proxqp --settings default
```

Check out ``qpbenchmark --help`` for a list of available commands and arguments.
Expand Down
