Parallel model building docs #3894

Closed
joaquimg opened this issue Nov 26, 2024 · 8 comments

Comments

@joaquimg
Member

Users keep asking about building models in parallel, and some are already doing it with @expression and @build_constraint.

We should add guidance about it in the parallelism tutorial.

@odow
Member

odow commented Nov 26, 2024

We already have
https://jump.dev/JuMP.jl/stable/tutorials/algorithms/parallelism/#When-building-a-JuMP-model

Does building in parallel really work/improve things?

@odow
Member

odow commented Nov 26, 2024

Closing in favor of #2348 (comment)

odow closed this as completed Nov 26, 2024
odow reopened this Nov 27, 2024
@odow
Member

odow commented Nov 27, 2024

I've just never seen a case where this is actually meaningful.

julia> using JuMP

julia> function build_threaded(N)
           model = Model()
           @variable(model, x[1:N])
           my_lock = Threads.ReentrantLock();
           Threads.@threads for i in 1:N
               con = @build_constraint(sum(x[1:i]) >= 0)
               lock(my_lock) do
                   add_constraint(model, con)
               end
           end
           return model
       end
build_threaded (generic function with 1 method)

julia> function build_serial(N)
           model = Model()
           @variable(model, x[1:N])
           for i in 1:N
               @constraint(model, sum(x[1:i]) >= 0)
           end
           return model
       end
build_serial (generic function with 1 method)

julia> @time build_serial(1_000);
  0.100237 seconds (47.79 k allocations: 108.902 MiB)

julia> @time build_serial(1_000);
  0.085962 seconds (47.79 k allocations: 108.902 MiB)

julia> @time build_threaded(1_000);
  0.067640 seconds (71.92 k allocations: 110.531 MiB, 247.57% compilation time)

julia> @time build_threaded(1_000);
  0.038823 seconds (47.82 k allocations: 108.904 MiB)

julia> @time build_threaded(10_000);
 14.665181 seconds (642.39 k allocations: 10.029 GiB, 42.87% gc time)

julia> @time build_serial(10_000);
 19.576950 seconds (642.36 k allocations: 10.029 GiB, 14.43% gc time)

julia> Threads.nthreads()
5

@jd-lara
Contributor

jd-lara commented Nov 28, 2024

We use parallelism to build the expressions used for the network flow constraints. We pre-generate all the expressions in parallel, taking the vector product of the variables and the column vector of the PTDF to build the affine expressions, and then we post the constraints.

That being said, it took a while to get it right.

https://github.com/NREL-Sienna/PowerSimulations.jl/blob/cfc1f52411797528d4d090a866cda3bb5f0cdfb5/src/devices_models/devices/AC_branches.jl#L617
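
A minimal sketch of that pattern, assuming a dense PTDF matrix ptdf and a vector of branch limits limit (hypothetical names, not the PowerSimulations.jl API): the affine expressions are built in parallel because that step does not mutate the model, and the constraints are added serially afterwards.

using JuMP

function build_flow_constraints(ptdf::Matrix{Float64}, limit::Vector{Float64})
    n_branches, n_buses = size(ptdf)
    model = Model()
    @variable(model, injection[1:n_buses])
    # Each thread writes only its own slot; building an AffExpr with
    # @expression does not add anything to the model.
    flow = Vector{AffExpr}(undef, n_branches)
    Threads.@threads for b in 1:n_branches
        flow[b] = @expression(model, sum(ptdf[b, j] * injection[j] for j in 1:n_buses))
    end
    # Posting constraints mutates the model, so this stays single-threaded.
    for b in 1:n_branches
        @constraint(model, -limit[b] <= flow[b] <= limit[b])
    end
    return model
end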

@odow
Member

odow commented Nov 28, 2024

Does it make a difference? Do you have benchmarks?

@jd-lara
Contributor

jd-lara commented Nov 28, 2024

Does it make a difference? Do you have benchmarks?

It makes a difference for super-large networks and doesn't hurt for smaller ones. I could test again; the serial code is still there for benchmarking and debugging.

@odow
Member

odow commented Nov 29, 2024

Okay. If people are going to do this, we should at least give instructions for how to do it safely.
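
One way to do it safely, riffing on the locked example above (a sketch, not the documented recipe): build the constraint objects in parallel with @build_constraint, which never touches the model, and keep every model-mutating call on a single thread so no lock is needed at all.

using JuMP

function build_threaded_collect(N)
    model = Model()
    @variable(model, x[1:N])
    # @build_constraint only creates the constraint object; it does not
    # touch `model`, so the threaded loop is safe without a lock.
    cons = Vector{Any}(undef, N)
    Threads.@threads for i in 1:N
        cons[i] = @build_constraint(sum(x[1:i]) >= 0)
    end
    # add_constraint mutates the model, so it runs on a single thread.
    for i in 1:N
        add_constraint(model, cons[i])
    end
    return model
end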

@odow
Member

odow commented Nov 29, 2024

Closing again in favor of #2348 (comment)

odow closed this as completed Nov 29, 2024