# [Nonlinear] add new nonlinear interface #3106

Merged · 23 commits · Aug 31, 2023 · changes shown below are from 1 commit
## docs/src/manual/nonlinear.md (62 changes: 31 additions & 31 deletions)

@@ -266,7 +266,7 @@
julia> model = Model();
julia> @variable(model, x);

julia> expr = op_ifelse(
-op_or(op_strictly_less_than(x, -1), op_greater_than(x, 1)),
+op_or(op_strictly_less_than(x, -1), op_greater_than_or_equal_to(x, 1)),
x^2,
0.0,
)
@@ -280,8 +280,8 @@
The available functions are:
| [`op_ifelse`](@ref) | `ifelse` |
| [`op_and`](@ref) | `&&` |
| [`op_or`](@ref) | `\|\|` |
-| [`op_greater_than`](@ref) | `>=` |
-| [`op_less_than`](@ref) | `<=` |
+| [`op_greater_than_or_equal_to`](@ref) | `>=` |
+| [`op_less_than_or_equal_to`](@ref) | `<=` |
| [`op_equal_to`](@ref) | `==` |
| [`op_strictly_greater_than`](@ref) | `>` |
| [`op_strictly_less_than`](@ref) | `<` |
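As a quick illustration of the renamed comparison operators, here is a minimal
sketch (written against the post-rename API; not part of the diff itself) that
builds the expression "if `-1 <= x <= 1` then `x^2` else `0`" programmatically:

```julia
using JuMP

model = Model()
@variable(model, x)
# Each comparison returns a GenericNonlinearExpr rather than a constraint,
# so the operators compose with op_and and op_ifelse:
expr = op_ifelse(
    op_and(op_greater_than_or_equal_to(x, -1), op_less_than_or_equal_to(x, 1)),
    x^2,
    0.0,
)
```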
@@ -315,62 +315,62 @@
Julia operators.
User-defined operators must return a scalar output. For a work-around, see
[User-defined operators with vector outputs](@ref).

-### Register an operator
+### Add an operator

-Register a user-defined operator using the [`@register`](@ref) macro:
+Add a user-defined operator using the [`@operator`](@ref) macro:

```@repl
using JuMP
square(x) = x^2
f(x, y) = (x - 1)^2 + (y - 2)^2
model = Model();
-@register(model, op_square, 1, square)
-@register(model, op_f, 2, f)
+@operator(model, op_square, 1, square)
+@operator(model, op_f, 2, f)
@variable(model, x[1:2]);
@objective(model, Min, op_f(x[1], op_square(x[2])))
```

-The arguments to [`@register`](@ref) are:
+The arguments to [`@operator`](@ref) are:

-1. The model in which the function is registered.
+1. The model to which the operator is added.
2. A Julia symbol object which serves as the name of the user-defined operator
in JuMP expressions. This name must not be the same as that of the function.
3. The number of scalar input arguments that the function takes.
4. A Julia method which computes the function.

!!! warning
-    User-defined operators cannot be re-registered or deleted.
+    User-defined operators cannot be deleted.

You can obtain a reference to the operator using the `model[:key]` syntax:

```@repl
using JuMP
square(x) = x^2
model = Model();
-@register(model, op_square, 1, square)
+@operator(model, op_square, 1, square)
op_square_2 = model[:op_square]
```
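A brief aside (a sketch against the post-rename API): the value stored in the
model is a `NonlinearOperator`, which is callable like the function it wraps:

```julia
using JuMP

square(x) = x^2
model = Model()
@operator(model, op_square, 1, square)
op_square_2 = model[:op_square]
@variable(model, x)
# Both references build the same nonlinear expression:
expr = op_square_2(x)  # identical to op_square(x)
# On plain numbers, the operator falls back to the wrapped function:
op_square_2(2.0)  # 4.0
```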

-### Registered operators without macros
+### Add an operator without macros

-The [`@register`](@ref) macro is syntactic sugar for the
-[`register_nonlinear_operator`](@ref) method. Thus, the non-macro version of the
+The [`@operator`](@ref) macro is syntactic sugar for the
+[`add_nonlinear_operator`](@ref) method. Thus, the non-macro version of the
preceding example is:

```@repl
using JuMP
square(x) = x^2
f(x, y) = (x - 1)^2 + (y - 2)^2
model = Model();
-op_square = register_nonlinear_operator(model, 1, square; name = :op_square)
+op_square = add_nonlinear_operator(model, 1, square; name = :op_square)
model[:op_square] = op_square
-op_f = register_nonlinear_operator(model, 2, f; name = :op_f)
+op_f = add_nonlinear_operator(model, 2, f; name = :op_f)
model[:op_f] = op_f
@variable(model, x[1:2]);
@objective(model, Min, op_f(x[1], op_square(x[2])))
```

-### Registering with the same name as an existing function
+### Operators with the same name as an existing function

A common error encountered is the following:
```jldoctest nonlinear_invalid_redefinition
@@ -381,31 +381,31 @@
julia> model = Model();
julia> f(x) = x^2
f (generic function with 1 method)

-julia> @register(model, f, 1, f)
-ERROR: Unable to register the nonlinear operator `:f` with the same name as
+julia> @operator(model, f, 1, f)
+ERROR: Unable to add the nonlinear operator `:f` with the same name as
an existing function.
[...]
```
-This error occurs because `@register(model, f, 1, f)` is equivalent to:
+This error occurs because `@operator(model, f, 1, f)` is equivalent to:
```julia
-julia> f = register_nonlinear_operator(model, 1, f; name = :f)
+julia> f = add_nonlinear_operator(model, 1, f; name = :f)
```
but `f` already exists as a Julia function.
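The simplest fix (a sketch, assuming the renamed macro from this PR) is to
choose an operator name that does not shadow the Julia function, for example
the `op_` prefix used throughout the manual:

```julia
using JuMP

f(x) = x^2
model = Model()
# `op_f` does not collide with the Julia function `f`, so this succeeds:
@operator(model, op_f, 1, f)
@variable(model, x)
@objective(model, Min, op_f(x))
```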

-If you evaluate the function without registering it, JuMP will trace the
-function using operator overloading:
+If you evaluate the function without adding it as an operator, JuMP will trace
+the function using operator overloading:
```jldoctest nonlinear_invalid_redefinition
julia> @variable(model, x);

julia> f(x)
x²
```

-To force JuMP to treat `f` as a user-defined operator and not trace it, register
-the function using [`register_nonlinear_operator`](@ref) and define a new method
+To force JuMP to treat `f` as a user-defined operator and not trace it, add
+the operator using [`add_nonlinear_operator`](@ref) and define a new method
which manually creates a [`NonlinearExpr`](@ref):
```jldoctest nonlinear_invalid_redefinition
-julia> _ = register_nonlinear_operator(model, 1, f; name = :f)
+julia> _ = add_nonlinear_operator(model, 1, f; name = :f)
NonlinearOperator(:f, f)

julia> f(x::AbstractJuMPScalar) = NonlinearExpr(:f, Any[x])
@@ -415,12 +415,12 @@
julia> @expression(model, log(f(x)))
log(f(x))
```

-### Register gradients and Hessians
+### Gradients and Hessians

By default, JuMP will use automatic differentiation to compute the gradient and
Hessian of user-defined operators. If your function is not amenable to
automatic differentiation, or you can compute analytic derivatives, you may pass
-additional arguments to [`@register`](@ref) to compute the first- and
+additional arguments to [`@operator`](@ref) to compute the first- and
second-derivatives.

#### Univariate functions
@@ -434,7 +434,7 @@
f(x) = x^2
∇f(x) = 2x
∇²f(x) = 2
model = Model();
-@register(model, op_f, 1, f, ∇f, ∇²f) # Providing ∇²f is optional
+@operator(model, op_f, 1, f, ∇f, ∇²f) # Providing ∇²f is optional
@variable(model, x)
@objective(model, Min, op_f(x))
```
@@ -461,7 +461,7 @@
function ∇²f(H::AbstractMatrix{T}, x::T...) where {T}
return
end
model = Model();
-@register(model, rosenbrock, 2, f, ∇f, ∇²f) # Providing ∇²f is optional
+@operator(model, rosenbrock, 2, f, ∇f, ∇²f) # Providing ∇²f is optional
@variable(model, x[1:2])
@objective(model, Min, rosenbrock(x[1], x[2]))
```
@@ -494,7 +494,7 @@
using JuMP
model = Model();
@variable(model, x[1:5])
f(x::Vector) = sum(x[i]^i for i in 1:length(x))
-@register(model, op_f, 5, (x...) -> f(collect(x)))
+@operator(model, op_f, 5, (x...) -> f(collect(x)))
@objective(model, Min, op_f(x...))
```

## docs/src/tutorials/applications/power_systems.jl (2 changes: 1 addition & 1 deletion)

@@ -513,7 +513,7 @@
function solve_nonlinear_economic_dispatch(
if silent
set_silent(model)
end
-@register(model, op_tcf, 1, thermal_cost_function)
+@operator(model, op_tcf, 1, thermal_cost_function)
N = length(generators)
@variable(model, generators[i].min <= g[i = 1:N] <= generators[i].max)
@variable(model, 0 <= w <= scenario.wind)
## docs/src/tutorials/nonlinear/nested_problems.jl (4 changes: 2 additions & 2 deletions)

@@ -142,7 +142,7 @@
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)
-@register(model, op_V, 2, V, ∇V, ∇²V)
+@operator(model, op_V, 2, V, ∇V, ∇²V)
@objective(model, Min, x[1]^2 + x[2]^2 + op_V(x[1], x[2]))
optimize!(model)
solution_summary(model)
@@ -214,7 +214,7 @@
end
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)
cache = Cache(Float64[], NaN, Float64[])
-@register(
+@operator(
model,
op_cached_f,
2,
## docs/src/tutorials/nonlinear/tips_and_tricks.jl (8 changes: 4 additions & 4 deletions)

@@ -46,8 +46,8 @@
foo_2(x, y) = foo(x, y)[2]
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x[1:2] >= 0, start = 0.1)
-@register(model, op_foo_1, 2, foo_1)
-@register(model, op_foo_2, 2, foo_2)
+@operator(model, op_foo_1, 2, foo_1)
+@operator(model, op_foo_2, 2, foo_2)
@objective(model, Max, op_foo_1(x[1], x[2]))
@constraint(model, op_foo_2(x[1], x[2]) <= 2)
function_calls = 0
@@ -114,8 +114,8 @@
println("function_calls = ", function_calls)
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x[1:2] >= 0, start = 0.1)
-@register(model, op_foo_1, 2, memoized_foo[1])
-@register(model, op_foo_2, 2, memoized_foo[2])
+@operator(model, op_foo_1, 2, memoized_foo[1])
+@operator(model, op_foo_2, 2, memoized_foo[2])
@objective(model, Max, op_foo_1(x[1], x[2]))
@constraint(model, op_foo_2(x[1], x[2]) <= 2)
function_calls = 0
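For readers of this hunk: `memoized_foo` comes from a memoization helper
defined earlier in the tutorial, outside the lines shown here. A minimal
sketch of the pattern (the tutorial's actual helper may differ in detail):

```julia
# Evaluate `foo` once per distinct input and serve each output element from
# the cache, so op_foo_1 and op_foo_2 share a single call to `foo`:
function memoize(foo::Function, n_outputs::Int)
    last_x, last_f = nothing, nothing
    function foo_i(i, x::T...) where {T<:Real}
        if x !== last_x
            last_x, last_f = x, foo(x...)
        end
        return last_f[i]::T
    end
    return [(x...) -> foo_i(i, x...) for i in 1:n_outputs]
end

memoized_foo = memoize(foo, 2)
```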
## docs/src/tutorials/nonlinear/user_defined_hessians.jl (4 changes: 2 additions & 2 deletions)

@@ -65,11 +65,11 @@
end
# you may assume only that `H` supports `size(H)` and `setindex!`.

# Now that we have the function, its gradient, and its Hessian, we can construct
-# a JuMP model, register the function, and use it in a macro:
+# a JuMP model, add the operator, and use it in a macro:

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2])
-@register(model, op_rosenbrock, 2, rosenbrock, ∇rosenbrock, ∇²rosenbrock)
+@operator(model, op_rosenbrock, 2, rosenbrock, ∇rosenbrock, ∇²rosenbrock)
@objective(model, Min, op_rosenbrock(x[1], x[2]))
optimize!(model)
solution_summary(model; verbose = true)
## src/macros.jl (20 changes: 10 additions & 10 deletions)

@@ -598,7 +598,7 @@
x > 2
const op_strictly_greater_than = NonlinearOperator(:>, >)

"""
-    op_less_than(x, y)
+    op_less_than_or_equal_to(x, y)

A function that falls back to `x <= y`, but when called with JuMP variables or
expressions, returns a [`GenericNonlinearExpr`](@ref).
@@ -610,17 +610,17 @@
julia> model = Model();

julia> @variable(model, x);

-julia> op_less_than(2, 2)
+julia> op_less_than_or_equal_to(2, 2)
true

-julia> op_less_than(x, 2)
+julia> op_less_than_or_equal_to(x, 2)
x <= 2
```
"""
-const op_less_than = NonlinearOperator(:<=, <=)
+const op_less_than_or_equal_to = NonlinearOperator(:<=, <=)

"""
-    op_greater_than(x, y)
+    op_greater_than_or_equal_to(x, y)

A function that falls back to `x >= y`, but when called with JuMP variables or
expressions, returns a [`GenericNonlinearExpr`](@ref).
@@ -632,14 +632,14 @@
julia> model = Model();

julia> @variable(model, x);

-julia> op_greater_than(2, 2)
+julia> op_greater_than_or_equal_to(2, 2)
true

-julia> op_greater_than(x, 2)
+julia> op_greater_than_or_equal_to(x, 2)
x >= 2
```
"""
-const op_greater_than = NonlinearOperator(:>=, >=)
+const op_greater_than_or_equal_to = NonlinearOperator(:>=, >=)

"""
op_equal_to(x, y)
@@ -672,9 +672,9 @@
function _rewrite_to_jump_logic(x)
elseif x.args[1] == :>
return Expr(:call, op_strictly_greater_than, x.args[2:end]...)
elseif x.args[1] == :<=
-return Expr(:call, op_less_than, x.args[2:end]...)
+return Expr(:call, op_less_than_or_equal_to, x.args[2:end]...)
elseif x.args[1] == :>=
-return Expr(:call, op_greater_than, x.args[2:end]...)
+return Expr(:call, op_greater_than_or_equal_to, x.args[2:end]...)
elseif x.args[1] == :(==)
return Expr(:call, op_equal_to, x.args[2:end]...)
end
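For context, a sketch of the user-facing behavior this rewrite enables
(assuming the renamed operators from this PR): comparison syntax inside
JuMP's expression macros is mapped onto the `op_*` functions, so a comparison
builds a nonlinear expression instead of a constraint:

```julia
using JuMP

model = Model()
@variable(model, x)
# Inside the macro, `x <= 1` is rewritten by `_rewrite_to_jump_logic` into
# `op_less_than_or_equal_to(x, 1)`, so the whole call becomes a
# GenericNonlinearExpr rather than a constraint:
expr = @expression(model, ifelse(x <= 1, x^2, 0.0))
```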