diff --git a/docs/src/booklet/4-encoding.md b/docs/src/booklet/4-encoding.md
index 7b08fc7c..0bdf08c3 100644
--- a/docs/src/booklet/4-encoding.md
+++ b/docs/src/booklet/4-encoding.md
@@ -1,8 +1,8 @@
 # Encoding Methods
 
-## Variable Encoding
+## Variables
 
-As you should already know, QUBO models are comprised only of binary variables.
+As you may already know, QUBO models are composed only of binary variables.
 So when we are reformulating general optimization problems, one important step is to encode variables into binary ones.
 
 `ToQUBO` currently implements 6 encoding techniques.
@@ -11,7 +11,7 @@ Also, they differ in the magnitude of their coefficients ``\Delta``.
 
 | Encoding | Binary Variables | # Linear terms | # Quadratic terms | ``\Delta`` |
 |:---------------------:|:------------------:|:----------------:|:-------------------:|:----------------:|
-| Binary | ``O(log \ n)`` | ``O(log \ n)`` | - | ``O(n)`` |
+| Binary | ``O(\log n)`` | ``O(\log n)`` | - | ``O(n)`` |
 | Unary | ``O(n)`` | ``O(n)`` | - | ``O(1)`` |
 | One-Hot | ``O(n)`` | ``O(n)`` | ``O(n^2)`` | ``O(n)`` |
 | Domain-Wall | ``O(n)`` | ``O(n)`` | ``O(n)`` | ``O(n)`` |
@@ -19,22 +19,20 @@ Also, they differ in the magnitude of their coefficients ``\Delta``.
 | Arithmetic Prog | ``O(\sqrt{n})`` | ``O(\sqrt{n})`` | - | ``O(\sqrt{n})`` |
 
-### Linear Encoding Methods
+### Linear Encoding
 
 ```@docs
-ToQUBO.Mirror
-```
-
-```@docs
-ToQUBO.LinearEncoding
 ToQUBO.Binary
 ToQUBO.Unary
 ToQUBO.Arithmetic
 ToQUBO.OneHot
 ```
 
-### Sequential Encoding Methods
 ```@docs
-ToQUBO.SequentialEncoding
+ToQUBO.Mirror
+```
+
+### Sequential Encoding
+```@docs
 ToQUBO.DomainWall
 ```
 
@@ -43,13 +41,14 @@ ToQUBO.DomainWall
 ToQUBO.Bounded
 ```
 
-## Constraint Encoding
+## Constraints
 
-A QUBO model is unconstrained. So when `ToQUBO` is reformulating a problem, it needs to encode all constraints into the objective function loosing as little information as possible.
+A QUBO model is unconstrained. So when `ToQUBO` is reformulating a problem, it needs to encode all constraints into the objective function losing as little information as possible.
 
-As constraints are introduced into the objective function, we need to make sure that they won't be violated. In order to do that, `ToQUBO` multiplies the encoded constraint by a large penalty $\rho$, so that any violation would result in an infeasible solution to the problem.
+As constraints are introduced into the objective function, we need to make sure that they won't be violated.
+In order to do that, `ToQUBO` multiplies the encoded constraint by a large penalty ``\rho``, so that any violation would result in a sub-optimal solution to the problem.
 
-Sometimes, moving a constraint to the objective fuction might introduce higher-order terms (degree > 2). If that is the case, `ToQUBO` needs to reduce it back to a quadratic function.
+Sometimes, the encoding process might introduce higher-order terms, requiring `ToQUBO` to reduce the offending polynomials back to a quadratic form.
 
 As of today, `ToQUBO` provides encoding for the following constraints: