Ensure compatibility with Plasmo 0.5.4 and static partition example #5

Open: wants to merge 3 commits into base: main
Changes from 2 commits
60 changes: 60 additions & 0 deletions examples/optimal_control_static_partition.jl
@@ -0,0 +1,60 @@
# Example demonstrating the use of overlap to solve a long horizon control problem
# Uses a static partitioning of the graph (useful if KaHyPar is not available)
using Plasmo, Ipopt
using SchwarzOpt

T = 200 # number of time points
d = sin.(1:T) # a disturbance vector
imbalance = 0.1 # partition imbalance
distance = 2 # expand distance
n_parts = 10 # number of partitions

# Create the model (an optigraph)
graph = OptiGraph()

@optinode(graph, state[1:T])
@optinode(graph, control[1:(T-1)])

for (i, node) in enumerate(state)
    @variable(node, x)
    @constraint(node, x >= 0)
    @objective(node, Min, x^2) #- 2*x*d[i])
end
for node in control
    @variable(node, u)
    @constraint(node, u >= -1000)
    @objective(node, Min, u^2)
end
n1 = state[1]
@constraint(n1, n1[:x] == 0)

for i in 1:(T-1)
    @linkconstraint(graph, state[i][:x] + control[i][:u] + d[i] == state[i+1][:x])
end

hypergraph, hyper_map = hyper_graph(graph) # create hypergraph object based on graph
# Create an equally sized partition based on the number of time points and number of partitions.
N_node = 2 * T - 1
partition_vector = repeat(1:n_parts, inner=N_node ÷ n_parts)
remaining = N_node % n_parts
if remaining > 0
    # assign any leftover nodes to the first partitions
    partition_vector = [partition_vector; 1:remaining]
end

# apply partition to graph
partition = Partition(hypergraph, partition_vector, hyper_map)
apply_partition!(graph, partition)

# calculate subproblems using expansion distance
subs = subgraphs(graph)
expanded_subgraphs = Plasmo.expand.(graph, subs, distance)
sub_optimizer = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)

# optimize using schwarz overlapping decomposition
optimizer = SchwarzOpt.optimize!(
    graph;
    subgraphs=expanded_subgraphs,
    sub_optimizer=sub_optimizer,
    max_iterations=100,
    mu=100.0,
)
8 changes: 4 additions & 4 deletions src/optimizer.jl
Member
@tso-martin: Are the value and dual functions not working as expected when given the graph as an argument in Plasmo v0.5.4? Removing the graph argument should cause issues when running on multiple threads.

Author
I get errors for each thread when value and dual are called with the graph argument. Example:

 nested task error: KeyError: key MOI.VariableIndex(37) not found
    Stacktrace:
      [1] getindex
        @ xxx\.julia\packages\MathOptInterface\yczX1\src\Utilities\CleverDicts.jl:164 [inlined]
      [2] getindex
        @ xxx\.julia\packages\MathOptInterface\yczX1\src\Utilities\copy\index_map.jl:64 [inlined]
      [3] get(node_pointer::Plasmo.NodePointer, attr::MathOptInterface.VariablePrimal, idx::MathOptInterface.VariableIndex)
        @ Plasmo xxx\SchwarzTryout\dev\Plasmo\src\moi_backend_node.jl:287
      [4] value(graph::OptiGraph, var::VariableRef)
        @ Plasmo xxx\SchwarzTryout\dev\Plasmo\src\optigraph.jl:514
      [5] (::SchwarzOpt.var"#25#27"{OptiGraph})(::Pair{Int64, VariableRef})
        @ SchwarzOpt .\none:0
      [6] iterate(g::Base.Generator, s::Vararg{Any})
        @ Base .\generator.jl:47 [inlined]
      [7] _all(f::Base.var"#384#386", itr::Base.Generator{Dict{Int64, VariableRef}, SchwarzOpt.var"#25#27"{OptiGraph}}, ::Colon)
        @ Base .\reduce.jl:1287
      [8] all
        @ .\reduce.jl:1283 [inlined]
      [9] Dict(kv::Base.Generator{Dict{Int64, VariableRef}, SchwarzOpt.var"#25#27"{OptiGraph}})
        @ Base .\dict.jl:111
     [10] _do_iteration(subproblem_graph::OptiGraph)
        @ SchwarzOpt xxx\SchwarzTryout\dev\SchwarzOpt\src\optimizer.jl:305
     [11] macro expansion
        @ xxx\SchwarzTryout\dev\SchwarzOpt\src\optimizer.jl:414 [inlined]
     [12] (::SchwarzOpt.var"#45#threadsfor_fun#32"{SchwarzOpt.var"#45#threadsfor_fun#31#33"{…}})(tid::Int64; onethread::Bool)
        @ SchwarzOpt .\threadingconstructs.jl:214
     [13] #45#threadsfor_fun
        @ SchwarzOpt .\threadingconstructs.jl:181 [inlined]
     [14] (::Base.Threads.var"#1#2"{SchwarzOpt.var"#45#threadsfor_fun#32"{SchwarzOpt.var"#45#threadsfor_fun#31#33"{…}}, Int64})()
        @ Base.Threads .\threadingconstructs.jl:153

Removing the graph argument worked fine in multiple threads.
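
For reference, the failing versus working pattern inside the threaded subproblem loop looks roughly like this (a minimal sketch based on the stack trace above, reusing expanded_subgraphs from the example file; sg.ext[:x_map] and sg.ext[:l_map] are the maps SchwarzOpt attaches to each subproblem, see the diff below; this is not the exact code in optimizer.jl):

# Threaded loop over the expanded subproblems, roughly as in SchwarzOpt.optimize!
Threads.@threads for sg in expanded_subgraphs
    Plasmo.optimize!(sg)
    # Querying through the subgraph raises "KeyError: key MOI.VariableIndex(...) not found" on Plasmo v0.5.4:
    # xk = Dict(k => value(sg, v) for (k, v) in sg.ext[:x_map])
    # Querying the variables directly works across threads:
    xk = Dict(k => value(v) for (k, v) in sg.ext[:x_map])
    lk = Dict(k => dual(v) for (k, v) in sg.ext[:l_map])
end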

Member
Interesting, that is definitely a bug. Without the graph argument, value grabs the last solution the node was used in. So it's possible it takes the wrong one if there are multiple subgraphs that use it. I will take a look at this and get a fix out. Thanks for checking into this!
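
A hypothetical illustration of that ambiguity, reusing names from the example above (not code from this PR):

sg1, sg2 = expanded_subgraphs[1], expanded_subgraphs[2]
n = state[40]   # for illustration, suppose this state node lies in the overlap of sg1 and sg2

Plasmo.optimize!(sg1)
Plasmo.optimize!(sg2)

# Without a graph argument, value returns the result from whichever subproblem
# solved the node last (sg2 here), which may not be the solution you meant to query.
value(n[:x])

# Passing the subgraph pins the query to that subproblem's solution; this is the
# form that raised the KeyError in the threaded run above.
value(sg1, n[:x])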

@@ -302,8 +302,8 @@ function _do_iteration(subproblem_graph::OptiGraph)
     x_sub = subproblem_graph.ext[:x_map] # primal variables to update
     l_sub = subproblem_graph.ext[:l_map] # dual variables to update

-    xk = Dict(key => value(subproblem_graph, val) for (key, val) in x_sub)
-    lk = Dict(key => dual(subproblem_graph, val) for (key, val) in l_sub)
+    xk = Dict(key => value(val) for (key, val) in x_sub)
+    lk = Dict(key => dual(val) for (key, val) in l_sub)

     return xk, lk
 end
@@ -327,7 +327,7 @@ function _calculate_primal_feasibility(optimizer)
 node = optinode(term)
 graph = optimizer.node_subgraph_map[node]
 subproblem_graph = optimizer.expanded_subgraph_map[graph]
-val += coeff * value(subproblem_graph, term)
+val += coeff * value(term)

 push!(prf, val - linkcon.set.value)
 end
@@ -457,7 +457,7 @@ function optimize!(optimizer::Optimizer)
 JuMP.set_start_value.(
     Ref(subproblem),
     all_variables(subproblem),
-    value.(Ref(subproblem), all_variables(subproblem)),
+    value.(all_variables(subproblem)),
 )
 end
 end