the solver encountered numerical issues #782
This problem is non-convex because it has the bilinear term St.out * s in the objective. You cannot use SDDP to find an optimal policy to this problem. But you can (sometimes) use it as a heuristic. Try Ipopt instead, which supports bilinear terms in the objective.
julia> using SDDP
julia> using Ipopt
julia> mu = 0.00025
0.00025
julia> sigma = 0.00145
0.00145
julia> model = SDDP.LinearPolicyGraph(
stages = 10,
sense = :Max,
upper_bound = 1000.0,
optimizer = Ipopt.Optimizer,
) do subproblem, t
exp_t = 0.0
@variable(subproblem, P >= 0, SDDP.State, initial_value = 1)
@variable(subproblem, St >= 0, SDDP.State, initial_value = 300)
@variable(subproblem, s >= 0)
@constraints(subproblem, begin
con_st, St.out == St.in
P.in - s == P.out
end)
@stageobjective(subproblem, St.out * s)
if t > 1
Ω = [-1, 1]
P = [0.5, 0.5]
SDDP.parameterize(subproblem, Ω, P) do ω
k = exp(mu + ω * sigma)
set_normalized_coefficient(con_st, St.in, -1 * k)
end
end
end
A policy graph with 10 nodes.
Node indices: 1, ..., 10
julia> SDDP.train(model)
-------------------------------------------------------------------
SDDP.jl (c) Oscar Dowson and contributors, 2017-24
-------------------------------------------------------------------
problem
nodes : 10
state variables : 2
scenarios : 5.12000e+02
existing cuts : false
options
solver : serial mode
risk measure : SDDP.Expectation()
sampling scheme : SDDP.InSampleMonteCarlo
subproblem structure
VariableRef : [6, 6]
AffExpr in MOI.EqualTo{Float64} : [2, 2]
VariableRef in MOI.GreaterThan{Float64} : [3, 4]
VariableRef in MOI.LessThan{Float64} : [1, 1]
numerical stability report
matrix range [1e+00, 1e+00]
objective range [0e+00, 0e+00]
bounds range [1e+03, 1e+03]
rhs range [0e+00, 0e+00]
-------------------------------------------------------------------
iteration simulation bound time (s) solves pid
-------------------------------------------------------------------
1 3.000000e+02 3.015395e+02 1.576190e-01 29 1
2 2.993707e+02 3.008530e+02 5.274728e+00 1058 1
11 3.006286e+02 3.003308e+02 6.299024e+00 1319 1
20 3.008111e+02 3.002630e+02 7.414713e+00 1580 1
23 2.985037e+02 3.002630e+02 1.241855e+01 2667 1
41 3.010550e+02 3.001860e+02 1.988726e+01 4189 1
61 3.008860e+02 3.001850e+02 2.831844e+01 5769 1
81 3.004311e+02 3.001850e+02 3.603869e+01 7349 1
100 2.985037e+02 3.001850e+02 3.869577e+01 7900 1
-------------------------------------------------------------------
status : simulation_stopping
total time (s) : 3.869577e+01
total solves : 7900
best bound : 3.001850e+02
simulation ci : 3.002781e+02 ± 1.911726e-01
numeric issues : 0
-------------------------------------------------------------------
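For intuition about the stochastic process in the model above: the one-step price multiplier k = exp(mu + ω * sigma) has an expected value only slightly above 1, so the expected price drifts very little from its initial value of 300. A small self-contained check (values taken from the model above; the variable names `ks` and `Ek` are illustrative):

```julia
# Check the one-step price multiplier used in SDDP.parameterize above:
# k = exp(mu + ω * sigma) with ω ∈ {-1, +1}, each with probability 0.5.
mu = 0.00025
sigma = 0.00145
Ω = [-1, 1]
ks = [exp(mu + ω * sigma) for ω in Ω]

# Expected multiplier under the equal-weight distribution [0.5, 0.5];
# analytically this equals exp(mu) * cosh(sigma).
Ek = sum(0.5 * k for k in ks)
println("multipliers = ", ks)
println("E[k] = ", Ek)   # slightly greater than 1
```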
Thank you, I successfully solved this problem by changing the solver.
No problem. Your model has identified an issue in HiGHS, which I will follow up with their developers:
julia> using JuMP, HiGHS
julia> model = JuMP.read_from_file("subproblem_9.mof.json")
A JuMP Model
├ solver: none
├ objective_sense: MAX_SENSE
│ └ objective_function_type: QuadExpr
├ num_variables: 6
├ num_constraints: 9
│ ├ AffExpr in MOI.EqualTo{Float64}: 2
│ ├ AffExpr in MOI.LessThan{Float64}: 1
│ ├ VariableRef in MOI.EqualTo{Float64}: 2
│ ├ VariableRef in MOI.GreaterThan{Float64}: 3
│ └ VariableRef in MOI.LessThan{Float64}: 1
└ Names registered in the model: none
julia> set_optimizer(model, HiGHS.Optimizer)
julia> optimize!(model)
Running HiGHS 1.7.2 (git hash: 5ce7a2753): Copyright (c) 2024 HiGHS under MIT licence terms
Coefficient ranges:
Matrix [6e-05, 3e+02]
Cost [1e+00, 1e+00]
Bound [3e+02, 1e+03]
RHS [2e-02, 2e-02]
Iteration Objective NullspaceDim
0 0.0091790712 0 0.00s
4 0.0091790712 0 0.00s
ERROR: getKktFailures: Row 2 status-value error: [-inf; 0.0183407; 0.0182876] has residual 5.3111e-05
ERROR: QP solver claims optimality, but with num/max/sum primal(1/5.3111e-05/5.3111e-05)and dual(1/1/1) infeasibilities
Model status : Optimal
QP ASM iterations: 4
Objective value : -5.3111048675e-05
HiGHS run time : 0.00
julia> solution_summary(model)
* Solver : HiGHS
* Status
Result count : 1
Termination status : OPTIMAL
Message from the solver:
"kHighsModelStatusOptimal"
* Candidate solution (result #1)
Primal status : INFEASIBLE_POINT
Dual status : INFEASIBLE_POINT
Objective value : -5.31110e-05
Objective bound : -0.00000e+00
Relative gap : Inf
Dual objective value : -1.84113e-02
* Work counters
Solve time (sec) : 7.04541e-04
Simplex iterations : 0
Barrier iterations : 0
Node count : -1
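The summary above illustrates why `termination_status` alone is not enough: HiGHS reports OPTIMAL, yet both the primal and dual statuses are INFEASIBLE_POINT. A hedged sketch of a post-solve sanity check (the function name `solution_is_usable` is illustrative, not part of JuMP):

```julia
using JuMP

# Sketch of a post-solve sanity check: only trust a result when the
# termination status and the primal status agree that it is a usable
# point. In the run above this would return false, flagging the bug.
function solution_is_usable(model)
    return termination_status(model) == OPTIMAL &&
           primal_status(model) == FEASIBLE_POINT
end
```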
julia> HiGHS.Highs_writeModel(unsafe_backend(model), "bug_sddp.mps")
Writing the model to bug_sddp.mps
WARNING: There are empty or excessively-long column names: using constructed names with prefix "c"
WARNING: There are empty or excessively-long row names: using constructed names with prefix "r"
1
shell> cat bug_sddp.mps
NAME
OBJSENSE
MAX
ROWS
N Obj
E r0
E r1
L r2
COLUMNS
c0 r1 1
c1 r1 -1
c1 r2 -302.4250644
c2 r0 -1.001701446
c3 r0 1
c3 r2 6.048507839e-05
c4 r1 -1
c5 Obj 1
c5 r2 1
RHS
RHS_V r2 0.01828761203
BOUNDS
FX BOUND c0 0
FX BOUND c2 302.7121865
MI BOUND c5
UP BOUND c5 1000
QUADOBJ
c3 c4 1
ENDATA
It turns out, I have already reported the same issue ERGO-Code/HiGHS#1727, but I forgot about it! I'll close this one since it is not a bug in SDDP.jl.
My model will use different random processes under different circumstances (mainly because the probabilities of the random processes differ). Can the random process of my model be expressed as follows? (The current model trains without errors and runs normally, but I am not sure if this is the right way to represent the random process.)
No. The code you have written is not doing what you think it is doing.
You are comparing a JuMP variable on the left to the integer 0 on the right, which is a plain Julia comparison that returns false:
julia> using JuMP
julia> model = Model();
julia> @variable(model, x);
julia> x == 0
false
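For contrast, a minimal sketch of the difference between the Julia comparison above and an actual JuMP equality constraint (the constraint name `c_fix` is illustrative):

```julia
using JuMP

model = Model()
@variable(model, x)

# A bare `x == 0` is evaluated by Julia, not JuMP: it compares a
# VariableRef to a number and returns `false` without touching the model.
@show x == 0

# To actually impose x = 0, build a constraint with the macro instead:
@constraint(model, c_fix, x == 0)
@show num_constraints(model, AffExpr, MOI.EqualTo{Float64})
```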
I understand what you mean now, but since the variable Exp_t is related to the previous variable, I have tried many methods and still failed to solve this problem.
I know that the above code cannot run normally. It is just a rough solution. Do you have any good suggestions?
Again, the code is not doing what you think it is doing. You must formulate models in SDDP.jl as a valid policy graph. See the first part of https://sddp.dev/stable/tutorial/first_steps/. You cannot change the probability mass of the random variable based on the value of a state variable like Exp_t. What is the model, in mathematics, that you want to implement?
Sorry for my ignorance. My mathematical model is: Et is the exp_t above. How should I write the relational expression between Et and St now?
You cannot model this as a linear policy graph because your stochastic process is not stagewise independent. You should instead model this as a Markovian policy graph. I haven't filled in the ??? coefficients below. Here's a start (I haven't tested this):
using SDDP, Ipopt
model = SDDP.MarkovianPolicyGraph(;
transition_matrices = [
ones(1, 1),
[0.1 0.8 0.1],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
[1.0 0.0 0.0; 0.1 0.8 0.1; 0.0 0.0 1.0],
],
sense = :Max,
upper_bound = 10.0,
optimizer = Ipopt.Optimizer,
) do subproblem, node
t, markov_state = node
@variables(subproblem, begin
P >= 0, SDDP.State, (initial_value = 1)
St >= 0, SDDP.State, (initial_value = 7)
s >= 0
end)
@constraints(subproblem, begin
con_st, St.out == St.in
P.in - s == P.out
end)
@stageobjective(subproblem, St.out * s)
if t > 1
Ω, P = [-1, 1], [0.5, 0.5]
SDDP.parameterize(subproblem, Ω, P) do ω
if markov_state == 1 # Exp_t == -1
set_normalized_coefficient(con_st, St.in, ???)
elseif markov_state == 2 # Exp_t == 0
set_normalized_coefficient(con_st, St.in, ???)
else # Exp_t == 1
@assert markov_state == 3
set_normalized_coefficient(con_st, St.in, ???)
end
end
end
return
end
Thank you again, I know the solution, it's just that
My model example is as follows:
St is the price, and its possible values form a binomial tree lattice:
St.out = St.in * e^(mu + ω*sigma)
When I train, I get the error "the solver encountered numerical issues".
I've tried a lot and still can't solve this problem.