Possible Bug with Maximization in Lower Level #182

Closed
LukasBarner opened this issue Aug 4, 2022 · 4 comments · Fixed by #188
LukasBarner commented Aug 4, 2022

Hi all,
great package, it really makes life a lot easier!
I noticed some strange behavior when the lower-level problem is a maximization. Attached is a MWE illustrating this; it is based on Siddiqui S, Gabriel SA (2013). An SOS1-based approach for solving MPECs with a natural gas market application. Networks and Spatial Economics 13(2):205–227.
Since the original publication has two lower-level players, I reformulated both lower-level problems into a single one (with equivalent KKTs).
When minimizing the negative of the lower-level objective, results are as expected. However, running the same code with the lower level switched to maximizing the actual objective gives wrong results.

The MWE is:

using JuMP, BilevelJuMP, Ipopt

# Data for the two-follower example: inverse demand a - b*(total quantity), unit cost C
F = [1,2]
c = Dict(1=>1, 2=>1)
C = 1
a = 13
b = 1

model = BilevelModel(Ipopt.Optimizer, mode = BilevelJuMP.ProductMode())

@variable(Lower(model), q[F] >= 0)
@variable(Upper(model), Q >= 0)

# Upper level: the leader maximizes its own profit
@objective(Upper(model), Max, ((a-b * (q[1] + q[2] + Q)) * Q - C*Q) )

# Lower level: single problem with KKTs equivalent to the two original followers,
# written as a minimization of the negative objective
@objective(Lower(model), Min, -((a-b * (q[1] + q[2] + Q)) * q[1] - C*q[1] + (a-b * (q[1] + q[2] + Q)) * q[2] - C*q[2] + b*q[1]*q[2]) )

optimize!(model)

@assert isapprox(value.(Q), 6; atol=1e-6)
@assert isapprox(value.(q).data, [2,2]; atol=1e-6) 
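
The failing variant differs only in the lower-level objective line, which states the maximization directly instead of minimizing the negative (the same line is used in the Gurobi reproduction below):

# Lower level as an explicit maximization; with this line the asserts above fail
@objective(Lower(model), Max, ((a-b * (q[1] + q[2] + Q)) * q[1] - C*q[1] + (a-b * (q[1] + q[2] + Q)) * q[2] - C*q[2] + b*q[1]*q[2]) )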

Edit: an earlier version of this post had a copy/paste issue; it is now corrected.

@joaquimg
Owner

Can you try with a MIP solver and a MIP mode (SOS1, Fortuny-Amat, indicator)?
Ipopt is a local solver, so it might reach suboptimal solutions.
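
For concreteness, a minimal sketch of such a mode switch (the solver choice and big-M values below are illustrative assumptions, not taken from this thread):

using JuMP, BilevelJuMP, Gurobi

# SOS1 reformulation of the complementarity conditions
model = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.SOS1Mode())

# Alternatives: Fortuny-Amat (big-M) or indicator-constraint reformulations
# model = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.FortunyAmatMcCormickMode(primal_big_M = 100, dual_big_M = 100))
# model = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.IndicatorMode())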

@LukasBarner
Author

I can also reproduce this behavior with Gurobi and SOS1Mode(); see the code below.
The assertion indeed fails for the lower-level maximization problem but passes for a minimization of the negative objective.

using JuMP, BilevelJuMP, Gurobi

F = [1,2]
c = Dict(1=>1, 2=>1)
C = 1
a = 13
b = 1

model = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.SOS1Mode())

@variable(Lower(model), q[F] >= 0)
@variable(Upper(model), Q >= 0)

@objective(Upper(model), Max, ((a-b * (q[1] + q[2] + Q)) * Q - C*Q) )

# Lower level written as an explicit maximization (the variant for which the assertion fails)
@objective(Lower(model), Max, ((a-b * (q[1] + q[2] + Q)) * q[1] - C*q[1] + (a-b * (q[1] + q[2] + Q)) * q[2] - C*q[2] + b*q[1]*q[2]) )
# The bilinear terms require Gurobi's nonconvex (spatial branching) support
set_optimizer_attribute(model, "NonConvex", 2)

optimize!(model)

@assert isapprox(value.(Q), 6; atol=1e-3)
@assert isapprox(value.(q).data, [2,2]; atol=1e-3) 
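
For comparison, replacing only the lower-level objective with the equivalent minimization of the negative expression (as in the first MWE) makes both assertions pass:

# Equivalent lower level as a minimization; with this line the asserts hold
@objective(Lower(model), Min, -((a-b * (q[1] + q[2] + Q)) * q[1] - C*q[1] + (a-b * (q[1] + q[2] + Q)) * q[2] - C*q[2] + b*q[1]*q[2]) )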

@LukasBarner
Author

Managed to take a look at this now. The KKT systems that BilevelJuMP generates for the two formulations are actually different.
I think this is related to a bug in Dualization.jl: jump-dev/Dualization.jl#142
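
One way to see the suspected Dualization.jl behavior in isolation is to dualize a toy problem stated once as a maximization and once as the equivalent minimization of the negative objective, and compare the printed duals (a minimal sketch with a made-up LP, not code from this issue):

using JuMP, Dualization

# Toy LP stated as a maximization (hypothetical example)
pmax = Model()
@variable(pmax, x >= 0)
@constraint(pmax, x <= 3)
@objective(pmax, Max, 2x)

# The same LP stated as minimizing the negative objective
pmin = Model()
@variable(pmin, y >= 0)
@constraint(pmin, y <= 3)
@objective(pmin, Min, -2y)

# The two dual problems should be equivalent; a discrepancy points at the
# handling of maximization objectives during dualization
print(dualize(pmax))
print(dualize(pmin))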

@joaquimg
Owner

Thanks! I will take a look.
