Commit

Update text
pedromxavier committed Apr 20, 2023
1 parent 2940896 commit cad8e12
Showing 1 changed file with 14 additions and 15 deletions.
29 changes: 14 additions & 15 deletions docs/src/booklet/4-encoding.md
@@ -1,8 +1,8 @@
# Encoding Methods

## Variable Encoding
## Variables

As you should already know, QUBO models are comprised only of binary variables.
As you may already know, QUBO models are comprised only of binary variables.
So when we are reformulating general optimization problems, one important step is to encode variables into binary ones.

`ToQUBO` currently implements 6 encoding techniques.
@@ -11,30 +11,28 @@ Also, they differ in the magnitude of their coefficients ``\Delta``.

| Encoding | Binary Variables | # Linear terms | # Quadratic terms | ``\Delta`` |
|:---------------------:|:------------------:|:----------------:|:-------------------:|:----------------:|
| Binary | ``O(log \ n)`` | ``O(log \ n)`` | - | ``O(n)`` |
| Binary | ``O(\log n)`` | ``O(\log n)`` | - | ``O(n)`` |
| Unary | ``O(n)`` | ``O(n)`` | - | ``O(1)`` |
| One-Hot | ``O(n)`` | ``O(n)`` | ``O(n^2)`` | ``O(n)`` |
| Domain-Wall | ``O(n)`` | ``O(n)`` | ``O(n)`` | ``O(n)`` |
| Bounded-Coefficient | ``O(n)`` | ``O(n)`` | - | ``O(1)`` |
| Arithmetic Prog | ``O(\sqrt{n})`` | ``O(\sqrt{n})`` | - | ``O(\sqrt{n})`` |
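
To make the first row of the table concrete, the sketch below builds the coefficient vector of a bounded binary encoding for an integer variable ``x \in \{0, \dots, n\}`` with ``n \ge 1``. This is a standalone illustration of the idea, not `ToQUBO`'s internal API, and the helper name `binary_encode` is hypothetical.

```julia
# Hypothetical helper (not ToQUBO's internal API): represent an integer
# x ∈ {0, …, n}, n ≥ 1, with k = ⌈log2(n + 1)⌉ binary variables y[1], …, y[k],
# so that x == sum(c[i] * y[i] for i in 1:k).
function binary_encode(n::Integer)
    k = ceil(Int, log2(n + 1))      # O(log n) binary variables
    c = [2^(i - 1) for i in 1:k]    # coefficients 1, 2, 4, …
    c[end] = n - sum(c[1:end-1])    # clip the last coefficient so the
                                    # encoded values cover exactly 0, …, n
    return c
end

binary_encode(10)  # [1, 2, 4, 3], i.e. x = y₁ + 2y₂ + 4y₃ + 3y₄
```

Unary encoding trades the other way: it uses ``O(n)`` unit-coefficient variables, which keeps ``\Delta`` constant at the cost of many more variables.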


### Linear Encoding Methods
### Linear Encoding
```@docs
ToQUBO.Mirror
```

```@docs
ToQUBO.LinearEncoding
ToQUBO.Binary
ToQUBO.Unary
ToQUBO.Arithmetic
ToQUBO.OneHot
```
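
As a concrete example of a linear encoding, the one-hot representation of a variable ``x`` over a finite set ``\{\gamma_1, \dots, \gamma_n\}`` introduces one binary variable per admissible value:

```math
x = \sum_{i = 1}^{n} \gamma_i y_i, \qquad \sum_{i = 1}^{n} y_i = 1
```

The side constraint is itself enforced as a quadratic penalty of the form ``\left( \sum_{i} y_i - 1 \right)^2``, which is where the ``O(n^2)`` quadratic terms in the table above come from.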

### Sequential Encoding Methods
```@docs
ToQUBO.SequentialEncoding
ToQUBO.Mirror
```

### Sequential Encoding
```@docs
ToQUBO.DomainWall
```

@@ -43,13 +41,14 @@ ToQUBO.DomainWall
ToQUBO.Bounded
```

## Constraint Encoding
## Constraints

A QUBO model is unconstrained. So when `ToQUBO` is reformulating a problem, it needs to encode all constraints into the objective function loosing as little information as possible.
A QUBO model is unconstrained. So when `ToQUBO` is reformulating a problem, it needs to encode all constraints into the objective function losing as little information as possible.

As constraints are introduced into the objective function, we need to make sure that they won't be violated. In order to do that, `ToQUBO` multiplies the encoded constraint by a large penalty $\rho$, so that any violation would result in an infeasible solution to the problem.
As constraints are introduced into the objective function, we need to make sure that they won't be violated.
In order to do that, `ToQUBO` multiplies the encoded constraint by a large penalty ``\rho``, so that any violation would result in a sub-optimal solution to the problem.
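
For instance, an equality constraint ``g(x) = 0`` can be moved into the objective as a quadratic penalty (shown schematically; the exact encoded form depends on the constraint type):

```math
\min_{x} f(x) \quad \text{s.t.} \quad g(x) = 0
\qquad \Longrightarrow \qquad
\min_{x} f(x) + \rho \, g(x)^{2}
```

For a sufficiently large ``\rho``, any assignment that violates the constraint pays a penalty that outweighs whatever it could gain in ``f``, so it can no longer be optimal for the penalized problem.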

Sometimes, moving a constraint to the objective fuction might introduce higher-order terms (degree > 2). If that is the case, `ToQUBO` needs to reduce it back to a quadratic function.
Sometimes, the encoding process might introduce higher-order terms, demanding `ToQUBO` to reduce the offending polynomials back to a quadratic form.
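
One standard reduction of this kind (Rosenberg's substitution), shown here only as an illustration and not necessarily the exact rule `ToQUBO` applies, replaces a product such as ``y_1 y_2`` inside a cubic term ``y_1 y_2 y_3`` by an auxiliary binary variable ``w``, penalizing any assignment where ``w \ne y_1 y_2``:

```math
y_1 y_2 y_3 \longrightarrow w \, y_3 + \rho \left( y_1 y_2 - 2 y_1 w - 2 y_2 w + 3 w \right)
```

The penalty term vanishes exactly when ``w = y_1 y_2`` and is at least ``1`` otherwise, so for a large enough penalty factor the quadratic model agrees with the original higher-order one at its optima.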

As of today, `ToQUBO` provides encoding for the following constraints:

