
OptiNode Parallel Processing #132

Open
SolidAhmad opened this issue Jan 16, 2025 · 3 comments
@SolidAhmad

I am exploring Plasmo.jl as a potential tool for tackling a large MILP problem that I've decomposed into 144 OptiNodes. Each OptiNode subproblem is substantial in size, and I’m interested in understanding whether the problem could be solved efficiently by leveraging multiple physical cores (potentially across several compute nodes) to solve each OptiNode using HiGHS.

Does Plasmo.jl currently support such a setup directly? If not, what would be the best approach to implement this workflow? Additionally, does this approach align with the intended design of Plasmo.jl, and are there any nuances or considerations I should be aware of? I’d appreciate any feedback or guidance to ensure I’m on the right track.

@dlcole3
Collaborator

dlcole3 commented Jan 16, 2025

Hi @SolidAhmad, good questions. @jalving may also have some feedback, but here are some initial considerations:

  • You can solve individual nodes or subgraphs independently from the rest of the problem (these solutions may not yield the same value as solving the entire graph since each node or subgraph is being solved without knowledge of other node/subgraph objectives). Under version 0.6 and beyond, Plasmo is designed to have subgraphs as the basic optimization object, so if you are trying to optimize subproblems, it is generally better to have the subproblems be subgraphs rather than nodes. You can do this pretty easily by creating a Partition object and partitioning your graph so that each node is on its own subgraph. You can see an example of this here. For a graph, g, you can do something like this:
node_membership_vector = [i for i in 1:num_nodes(g)]
partition = Plasmo.Partition(g, node_membership_vector)
apply_partition!(g, partition)

Note that you will have to set the subgraph objectives to use the node objectives by calling set_to_node_objectives(sg) for each subgraph sg in all_subgraphs(g).
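The objective-setting step above can be sketched as follows (assuming `g` has already been partitioned so each subproblem is its own subgraph):

```julia
# Build each subgraph's objective from the objectives of its member nodes.
for sg in all_subgraphs(g)
    set_to_node_objectives(sg)
end
```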

  • You can optimize multiple subgraphs in parallel by using multiple threads, such as below:
sgs = local_subgraphs(graph)
Threads.@threads for i in 1:length(sgs)
    optimize!(sgs[i])
end

However, you should know that Plasmo does not yet support distributed memory, so this approach does not reduce memory usage. It can still speed up your overall solve time if you have multiple threads available.

  • We just released a preprint on doing decomposition with Plasmo.jl that you can find here, where we introduce a Benders decomposition approach that can be applied to LP and MILP problems. The source code for our package implementing this decomposition approach can be found here.

Does that help?

@SolidAhmad
Author

@dlcole3 This is extremely helpful; I appreciate the time you took to address my question so comprehensively. As you suggested, I will look at your sources and try to formulate my problem with subgraphs rather than nodes. One follow-up question: what do you see as the major limitation to supporting a distributed-memory framework for this approach?

@dlcole3
Collaborator

dlcole3 commented Jan 17, 2025

Distributing memory requires other packages to be incorporated on top of Plasmo. Ideally, each subgraph in Plasmo could be distributed to a different processor, but that would require some changes to the existing OptiGraph abstraction. It would be a nontrivial extension to Plasmo.jl, though I think it is very possible to do.
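As an illustration of what such a workflow might look like, here is a hypothetical sketch using the standard library's Distributed.jl. The calls (`addprocs`, `pmap`, `set_optimizer`, `optimize!`) are standard Distributed/Plasmo functions, but whether OptiGraph subgraphs serialize cleanly to worker processes is precisely the open question discussed above, so treat this as a sketch rather than a supported pattern:

```julia
# Hypothetical sketch only: Plasmo does not officially support this today.
using Distributed
addprocs(4)                          # e.g., one worker per physical core
@everywhere using Plasmo, HiGHS

# Assume `g` is an OptiGraph whose subproblems are subgraphs.
sgs = local_subgraphs(g)
objs = pmap(sgs) do sg
    set_optimizer(sg, HiGHS.Optimizer)
    optimize!(sg)                    # each subgraph solved on its own worker
    objective_value(sg)
end
```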
