[docs] add Gurobi to the docs and remove GLPK #3904

Merged · 6 commits · Dec 30, 2024
5 changes: 4 additions & 1 deletion .github/workflows/documentation.yml
@@ -16,13 +16,16 @@ jobs:
with:
version: '1'
- name: Install dependencies
env:
WLSLICENSE: ${{ secrets.WLSLICENSE }}
shell: julia --color=yes --project=docs/ {0}
run: |
using Pkg
Pkg.develop(PackageSpec(path=pwd()))
Pkg.instantiate()
- name: Build and deploy
env:
WLSLICENSE: ${{ secrets.WLSLICENSE }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
DOCUMENTER_LATEX_DEBUG: ${{ github.workspace }}/latex-debug-logs
@@ -31,7 +34,7 @@ jobs:
if: ${{ always() }}
with:
name: PDF build logs
path: ${{ github.workspace }}/latex-debug-logs
- uses: errata-ai/vale-action@reviewdog
with:
version: 3.3.1
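The workflow change above only exposes the `WLSLICENSE` secret to the install and deploy steps; how the secret is consumed is not shown in this hunk. One plausible pattern (an assumption, not taken from this PR) is for the docs build to write the secret to a `gurobi.lic` file and point `GRB_LICENSE_FILE` at it before Gurobi.jl is first used:

```julia
# Hypothetical sketch, not part of this diff: expose a Gurobi Web License
# Service (WLS) license to Gurobi.jl. Assumes the WLSLICENSE environment
# variable holds the full contents of a gurobi.lic file (WLSACCESSID,
# WLSSECRET, and LICENSEID entries).
if haskey(ENV, "WLSLICENSE")
    license_file = joinpath(mktempdir(), "gurobi.lic")
    write(license_file, ENV["WLSLICENSE"])
    # Gurobi reads the license path from this environment variable.
    ENV["GRB_LICENSE_FILE"] = license_file
end
```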
4 changes: 2 additions & 2 deletions docs/Project.toml
@@ -14,7 +14,7 @@ Downloads = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
Dualization = "191a621a-6537-11e9-281d-650236a99e60"
Enzyme = "7da242da-08ed-463a-9acd-ee780be4f1d9"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
GLPK = "60bf3e95-4087-53dc-ae20-288a0d20c6a6"
Gurobi = "2e9cd046-0924-5485-92f1-d5272153d98b"
HTTP = "cd3eb016-35fb-5094-929b-558a96fad6f3"
HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
Images = "916415d5-f1e6-5110-898d-aaa5f9f070e0"
@@ -60,7 +60,7 @@ DocumenterCitations = "1"
Dualization = "0.5"
Enzyme = "0.13.7"
ForwardDiff = "0.10"
GLPK = "=1.2.1"
Gurobi = "1"
HTTP = "1.5.4"
HiGHS = "=1.12.0"
Images = "0.26.1"
25 changes: 11 additions & 14 deletions docs/src/manual/callbacks.md
@@ -89,9 +89,7 @@ information about lazy constraints, see this [blog post by Paul Rubin](https://o
A lazy constraint callback can be set using the following syntax:

```jldoctest
julia> import GLPK

julia> model = Model(GLPK.Optimizer);
julia> model = Model();

julia> @variable(model, x <= 10, Int)
x
@@ -117,6 +115,7 @@ julia> function my_callback_function(cb_data)
con = @build_constraint(x <= 2)
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
end
return
end
my_callback_function (generic function with 1 method)

@@ -134,19 +133,21 @@ julia> set_attribute(model, MOI.LazyConstraintCallback(), my_callback_function)
solver returning an incorrect solution, or lead to many constraints being
added, slowing down the solution process.
```julia
model = Model(GLPK.Optimizer)
model = Model()
@variable(model, x <= 10, Int)
@objective(model, Max, x)
function bad_callback_function(cb_data)
# Don't do this!
con = @build_constraint(x <= 2)
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
return
end
function good_callback_function(cb_data)
if callback_value(x) > 2
con = @build_constraint(x <= 2)
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
end
return
end
set_attribute(model, MOI.LazyConstraintCallback(), good_callback_function)
```
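Because the examples above now build the model with `Model()` and never call `optimize!`, the callback is only registered, not executed. As a usage sketch (not part of this diff), you would attach a solver that supports lazy constraints, such as Gurobi, before solving:

```julia
# Hypothetical usage sketch: attach a lazy-constraint-capable solver to the
# model from the example above and solve it so the callback actually runs.
import Gurobi
set_optimizer(model, Gurobi.Optimizer)
optimize!(model)
```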
@@ -170,9 +171,7 @@ aforementioned [blog post](https://orinanobworld.blogspot.com/2012/08/user-cuts-
A user-cut callback can be set using the following syntax:

```jldoctest
julia> import GLPK

julia> model = Model(GLPK.Optimizer);
julia> model = Model();

julia> @variable(model, x <= 10.5, Int)
x
@@ -184,6 +183,7 @@ julia> function my_callback_function(cb_data)
x_val = callback_value(cb_data, x)
con = @build_constraint(x <= floor(x_val))
MOI.submit(model, MOI.UserCut(cb_data), con)
return
end
my_callback_function (generic function with 1 method)

@@ -221,22 +221,19 @@ solutions from them.
A heuristic solution callback can be set using the following syntax:

```jldoctest
julia> import GLPK
julia> model = Model();

julia> model = Model(GLPK.Optimizer);
julia> @variable(model, x <= 10.5, Int);

julia> @variable(model, x <= 10.5, Int)
x

julia> @objective(model, Max, x)
x
julia> @objective(model, Max, x);

julia> function my_callback_function(cb_data)
x_val = callback_value(cb_data, x)
status = MOI.submit(
model, MOI.HeuristicSolution(cb_data), [x], [floor(Int, x_val)]
)
println("I submitted a heuristic solution, and the status was: ", status)
return
end
my_callback_function (generic function with 1 method)

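The heuristic callback in the hunk above prints the status returned by `MOI.submit`. As a usage note (not part of this diff), MathOptInterface reports one of three outcomes for a submitted heuristic solution, which the callback could branch on:

```julia
# Hypothetical follow-up to the example above: inspect the status returned by
# MOI.submit for a heuristic solution.
if status == MOI.HEURISTIC_SOLUTION_ACCEPTED
    println("The solver accepted the heuristic solution")
elseif status == MOI.HEURISTIC_SOLUTION_REJECTED
    println("The solver rejected the heuristic solution")
else
    @assert status == MOI.HEURISTIC_SOLUTION_UNKNOWN
    println("The solver did not report whether it used the solution")
end
```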
8 changes: 4 additions & 4 deletions docs/src/tutorials/algorithms/benders_decomposition.jl
@@ -26,7 +26,7 @@
# in JuMP. It uses the following packages:

using JuMP
import GLPK
import Gurobi
import HiGHS
import Printf
import Test #src
@@ -283,13 +283,13 @@ objective_value(model)
# node of the first-stage MILP at each iteration.

# !!! tip
# We use GLPK for this model because HiGHS does not support lazy constraints.
# For more information on callbacks, read the page
# We use Gurobi for this model because HiGHS does not support lazy
# constraints. For more information on callbacks, read the page
# [Solver-independent callbacks](@ref callbacks_manual).

# As before, we construct the same first-stage subproblem:

lazy_model = Model(GLPK.Optimizer)
lazy_model = Model(Gurobi.Optimizer)
set_silent(lazy_model)
@variable(lazy_model, x[1:n, 1:n], Bin)
@variable(lazy_model, θ >= M)
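The hunk above changes only the construction of `lazy_model`; the Benders lazy-constraint callback itself lies outside the diff. For context, the tutorial registers a callback roughly like the following sketch, which assumes the tutorial's `solve_subproblem(x)` helper returning the subproblem objective `obj` and duals `π`:

```julia
# Sketch of the Benders lazy-constraint callback (outside this diff). Assumes
# solve_subproblem(x) from the tutorial, returning a named tuple (obj, π).
x, θ = lazy_model[:x], lazy_model[:θ]
function benders_callback(cb_data)
    # Only separate cuts at integer-feasible incumbents.
    if callback_node_status(cb_data, lazy_model) != MOI.CALLBACK_NODE_STATUS_INTEGER
        return
    end
    x_k = callback_value.(cb_data, x)
    θ_k = callback_value(cb_data, θ)
    ret = solve_subproblem(x_k)
    if θ_k < ret.obj - 1e-6
        # Add a Benders optimality cut as a lazy constraint.
        cut = @build_constraint(θ >= ret.obj + sum(ret.π .* (x .- x_k)))
        MOI.submit(lazy_model, MOI.LazyConstraint(cb_data), cut)
    end
    return
end
set_attribute(lazy_model, MOI.LazyConstraintCallback(), benders_callback)
```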
22 changes: 12 additions & 10 deletions docs/src/tutorials/algorithms/tsp_lazy_constraints.jl
@@ -31,9 +31,10 @@
# It uses the following packages:

using JuMP
import GLPK
import Random
import Gurobi
import Plots
import Random
import Test

# ## [Mathematical Formulation](@id tsp_model)

@@ -124,7 +125,7 @@ function generate_distance_matrix(n; random_seed = 1)
return X, Y, d
end

n = 20
n = 100
X, Y, d = generate_distance_matrix(n)

# For the JuMP model, we first initialize the model object. Then, we create the
@@ -133,7 +134,8 @@ X, Y, d = generate_distance_matrix(n)
# constraints that `x[i, j] == x[j, i]`.

function build_tsp_model(d, n)
model = Model(GLPK.Optimizer)
model = Model(Gurobi.Optimizer)
set_silent(model)
@variable(model, x[1:n, 1:n], Bin, Symmetric)
@objective(model, Min, sum(d .* x) / 2)
@constraint(model, [i in 1:n], sum(x[i, :]) == 2)
@@ -271,14 +273,14 @@ optimize!(lazy_model)
@assert is_solved_and_feasible(lazy_model)
objective_value(lazy_model)

#-

time_lazy = solve_time(lazy_model)

# This finds the same optimal tour:

plot_tour(X, Y, value.(lazy_model[:x]))

# Surprisingly, for this particular model with GLPK, the solution time is worse
# than the iterative method:

time_lazy = solve_time(lazy_model)
# The solution time is faster than that of the iterative approach:

# In most other cases and solvers, however, the lazy time should be faster than
# the iterative method.
Test.@test time_lazy < time_iterated
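The changes above switch the lazy model to Gurobi and enlarge the instance to `n = 100`; the subtour-elimination callback itself is unchanged and not shown in the diff. For context, it looks roughly like this sketch, which assumes the tutorial's `subtour(x)` helper that returns the shortest cycle in an integer solution:

```julia
# Sketch of the subtour-elimination callback (unchanged by this diff). Assumes
# the subtour(x) helper defined earlier in the tutorial.
function subtour_elimination_callback(cb_data)
    if callback_node_status(cb_data, lazy_model) != MOI.CALLBACK_NODE_STATUS_INTEGER
        return  # Only run at integer-feasible incumbents.
    end
    cycle = subtour(callback_value.(cb_data, lazy_model[:x]))
    if !(1 < length(cycle) < n)
        return  # The incumbent is already a single tour.
    end
    # Forbid this subtour: at most length(cycle) - 1 of its edges may be used.
    S = [(i, j) for (i, j) in Iterators.product(cycle, cycle) if i < j]
    con = @build_constraint(
        sum(lazy_model[:x][i, j] for (i, j) in S) <= length(cycle) - 1,
    )
    MOI.submit(lazy_model, MOI.LazyConstraint(cb_data), con)
    return
end
set_attribute(lazy_model, MOI.LazyConstraintCallback(), subtour_elimination_callback)
```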
68 changes: 47 additions & 21 deletions docs/src/tutorials/linear/callbacks.jl
@@ -11,7 +11,7 @@
# The tutorial uses the following packages:

using JuMP
import GLPK
import Gurobi
import Random
import Test

@@ -29,7 +29,8 @@ import Test
# An example using a lazy constraint callback.

function example_lazy_constraint()
model = Model(GLPK.Optimizer)
model = Model(Gurobi.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
@@ -57,6 +58,7 @@ function example_lazy_constraint()
println("Adding $(con)")
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
end
return
end
set_attribute(model, MOI.LazyConstraintCallback(), my_callback_function)
optimize!(model)
@@ -78,7 +80,11 @@ function example_user_cut_constraint()
Random.seed!(1)
N = 30
item_weights, item_values = rand(N), rand(N)
model = Model(GLPK.Optimizer)
model = Model(Gurobi.Optimizer)
set_silent(model)
## Turn off "Cuts" parameter so that our new one must be called. In real
## models, you should leave "Cuts" turned on.
set_attribute(model, "Cuts", 0)
@variable(model, x[1:N], Bin)
@constraint(model, sum(item_weights[i] * x[i] for i in 1:N) <= 10)
@objective(model, Max, sum(item_values[i] * x[i] for i in 1:N))
@@ -115,7 +121,11 @@ function example_heuristic_solution()
Random.seed!(1)
N = 30
item_weights, item_values = rand(N), rand(N)
model = Model(GLPK.Optimizer)
model = Model(Gurobi.Optimizer)
set_silent(model)
## Turn off "Heuristics" parameter so that our new one must be called. In
## real models, you should leave "Heuristics" turned on.
set_attribute(model, "Heuristics", 0)
@variable(model, x[1:N], Bin)
@constraint(model, sum(item_weights[i] * x[i] for i in 1:N) <= 10)
@objective(model, Max, sum(item_values[i] * x[i] for i in 1:N))
@@ -140,41 +150,57 @@ end

example_heuristic_solution()

# ## GLPK solver-dependent callback
# ## Gurobi solver-dependent callback

# An example using GLPK's solver-dependent callback.
# An example using Gurobi's solver-dependent callback.

function example_solver_dependent_callback()
model = Model(GLPK.Optimizer)
model = direct_model(Gurobi.Optimizer())
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
lazy_called = false
function my_callback_function(cb_data)
lazy_called = true
reason = GLPK.glp_ios_reason(cb_data.tree)
println("Called from reason = $(reason)")
if reason != GLPK.GLP_IROWGEN
cb_calls = Cint[]
function my_callback_function(cb_data, cb_where::Cint)
## You can reference variables outside the function as normal
push!(cb_calls, cb_where)
## You can select where the callback is run
if cb_where == Gurobi.GRB_CB_MIPNODE
## You can query a callback attribute using GRBcbget
resultP = Ref{Cint}()
Gurobi.GRBcbget(
cb_data,
cb_where,
Gurobi.GRB_CB_MIPNODE_STATUS,
resultP,
)
if resultP[] != Gurobi.GRB_OPTIMAL
return # Solution is something other than optimal.
end
elseif cb_where != Gurobi.GRB_CB_MIPSOL
return
end
## Before querying `callback_value`, you must call:
Gurobi.load_callback_variable_primal(cb_data, cb_where)
x_val = callback_value(cb_data, x)
y_val = callback_value(cb_data, y)
## You can submit solver-independent MathOptInterface attributes such as
## lazy constraints, user-cuts, and heuristic solutions.
if y_val - x_val > 1 + 1e-6
con = @build_constraint(y - x <= 1)
println("Adding $(con)")
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
elseif y_val + x_val > 3 + 1e-6
con = @build_constraint(y - x <= 1)
println("Adding $(con)")
con = @build_constraint(y + x <= 3)
MOI.submit(model, MOI.LazyConstraint(cb_data), con)
end
## You can terminate the callback as follows:
Gurobi.GRBterminate(backend(model))
return
end
set_attribute(model, GLPK.CallbackFunction(), my_callback_function)
## You _must_ set this parameter if using lazy constraints.
set_attribute(model, "LazyConstraints", 1)
set_attribute(model, Gurobi.CallbackFunction(), my_callback_function)
optimize!(model)
Test.@test is_solved_and_feasible(model)
Test.@test lazy_called
Test.@test value(x) == 1
Test.@test value(y) == 2
Test.@test termination_status(model) == MOI.INTERRUPTED
return
end
