[docs] add Transitioning from MATLAB tutorial #3698

Merged: 17 commits, Mar 6, 2024
docs/make.jl: 4 changes (3 additions & 1 deletion)
@@ -290,7 +290,9 @@ const _PAGES = [
"tutorials/getting_started/debugging.md",
"tutorials/getting_started/design_patterns_for_larger_models.md",
"tutorials/getting_started/performance_tips.md",
"tutorials/getting_started/transitioning_from_matlab.md",
],
"Transitioning" => [
"tutorials/transitioning/transitioning_from_matlab.md",
],
"Linear programs" => [
"tutorials/linear/introduction.md",
@@ -270,13 +270,58 @@ objective_value(model)
# end
# ```

# ## Symmetric and Hermitian matrices

# Julia has specialized support for symmetric and Hermitian matrices in the
# `LinearAlgebra` package:

import LinearAlgebra

# If you have a matrix that is numerically symmetric:

x = [1 2; 2 3]

#-

LinearAlgebra.issymmetric(x)

# then you can wrap it in a `LinearAlgebra.Symmetric` matrix to tell Julia's
# type system that the matrix is symmetric.

LinearAlgebra.Symmetric(x)

# Using a `Symmetric` matrix lets Julia and JuMP use more efficient algorithms
# when they are working with symmetric matrices.

# If you have a matrix that is nearly but not exactly symmetric:

x = [1.0 2.0; 2.001 3.0]
LinearAlgebra.issymmetric(x)

# then you could, as you might do in MATLAB, make it numerically symmetric as
# follows:

x_sym = 0.5 * (x + x')

# In Julia, you can explicitly choose whether to use the lower or upper triangle
# of the matrix:

x_sym = LinearAlgebra.Symmetric(x, :L)

#-

x_sym = LinearAlgebra.Symmetric(x, :U)

# The same applies for Hermitian matrices, using `LinearAlgebra.Hermitian` and
# `LinearAlgebra.ishermitian`.
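
# For example (a minimal sketch of the Hermitian analogue):

x = [1 2+1im; 2-1im 3]

#-

LinearAlgebra.ishermitian(x)

#-

LinearAlgebra.Hermitian(x)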

Contributor

What I wanted to point out here is that with MATLAB you have to be paranoid about making your matrices exactly Hermitian, in order to not get subtle errors, whereas with JuMP you just need to wrap them and you're good to go.

But now I've tested it, and it turns out that it's not true. If you try to constrain a non-Hermitian matrix to be in HermitianPSDCone() you get an error message, as I expected, but if you try to constrain a non-Symmetric matrix to be in PSDCone() JuMP will let you (and fail afterwards, of course). Is that intentional?

Member Author

For non-symmetric matrices, JuMP adds equality constraints for the off-diagonal elements. It lets you write

using JuMP
model = Model()
@variable(model, X[1:2, 1:2])
@constraint(model, X in PSDCone())
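
Conceptually, that square form behaves like a triangular PSD constraint plus an equality on the mismatched off-diagonal entries; a rough sketch of the equivalent formulation (not the exact constraints JuMP generates internally):

import LinearAlgebra
@constraint(model, X[1, 2] == X[2, 1])
@constraint(model, LinearAlgebra.Symmetric(X) in PSDCone())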

We don't have the same for Hermitian because there is no HermitianPositiveSemidefiniteConeSquare (I'm sure we've discussed this, but I can't find the link. Edit: found it: #3369 (comment)).

Contributor

Why? Is there any case where you would need that? As far as I can see this can either generate less efficient code or give you an error.

Member Author (@odow, Mar 6, 2024)

Regardless of the need, it exists in JuMP 1.0 so we are not changing it.

Someone could also write [x y; 1 z] in PSDCone() and we would add a constraint that y == 1. It may have been sensible to support only the triangular form of PSD, but we didn't do that, so it's no longer up for discussion.

We added Hermitian support after 1.0, which is why it supports only the triangular form.
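
A tiny sketch of that square-form behavior (a throwaway toy model, purely for illustration):

using JuMP
model = Model()
@variable(model, x)
@variable(model, y)
@variable(model, z)
# The mismatched off-diagonal pair (y versus the constant 1) gets an
# equality constraint, forcing y == 1.
@constraint(model, [x y; 1 z] in PSDCone())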

Contributor

I was just curious, I wasn't suggesting to change this, it would clearly be a breaking change.

Although now that you brought it up, I will definitely suggest removing it when the time comes for JuMP 2.0.

Member Author

> I will definitely suggest removing it when the time comes for JuMP 2.0

My goal is not to release JuMP 2.0 if at all possible. If we did, it would be for a minimal set of breaking changes, not just to remove some things we don't like. The only candidate was the nonlinear changes, but we seem to have managed to sneak them in with minimal disruption. The JuMP developers have no plans for JuMP 2.0.

# ## Primal versus dual form

# When you translate some optimization problems from YALMIP or CVX to JuMP, you
# might be surprised to see it get much faster or much slower, even if you're
# using exactly the same solver. The most likely reason is that YALMIP will
# always interpret the problem as the dual form, whereas CVX and JuMP will try to
# interpret the problem in the form most appropriate to the solver. If the
# problem is more naturally formulated in the primal form it is likely that
# YALMIP's performance will suffer, or if JuMP gets it wrong, its performance will
# suffer. It might be worth trying both primal and dual forms if you're having
@@ -334,7 +379,7 @@ function robustness_jump(d)
    optimize!(model)
    if is_solved_and_feasible(model)
        WT = dual(PPT)
-        return value(λ), LinearAlgebra.dot(WT, rhoT)
+        return value(λ), real(LinearAlgebra.dot(WT, rhoT))
Contributor

Why did you add real here? It's redundant, dot already returns a real number from Julia 1.11 onwards.

Member Author

Because the docs currently build with Julia 1.10 and we support Julia 1.6. Also, v1.11 isn't released yet.

Contributor

What's the problem? It doesn't cause any error with older versions of Julia, the output is just slightly less neat.

Member Author

dot currently returns a Complex{Float64} instead of a Float64. Your MATLAB examples return a Float64.

julia> import LinearAlgebra

julia> A = LinearAlgebra.Hermitian(rand(ComplexF64, (3, 3)))
3×3 LinearAlgebra.Hermitian{ComplexF64, Matrix{ComplexF64}}:
 0.468242+0.0im       0.296799+0.49972im   0.621408+0.148878im
 0.296799-0.49972im   0.413853+0.0im       0.677598+0.875476im
 0.621408-0.148878im  0.677598-0.875476im   0.59341+0.0im

julia> LinearAlgebra.dot(A, A)
4.6860981128420285 + 0.0im

julia> real(LinearAlgebra.dot(A, A))
4.6860981128420285

Contributor

I know. This is not an error, and will change anyway in Julia 1.11.

Member Author

> I know

I know because I saw your PR 😄

My point is that I personally would argue that changing from ComplexF64 to Float64 is a breaking change, and the fact it will change in Julia v1.11 is not compelling because we still have many JuMP users on a range of old versions. I want something that works across all versions, not just some upcoming one. We cannot assume that new users from MATLAB will use a recent version of Julia. See, for example, a discourse user on Julia v1.5.

Contributor

I really don't understand the problem. It works. It's just slightly uglier. And moreover it's just example code in a tutorial. Nothing depends on whether its return value is Float64 or ComplexF64.

Member Author

It doesn't work. One returns a ComplexF64, and the other returns a Float64.

Contributor

Why is this a problem?

    else
        return "Something went wrong: $(raw_status(model))"
    end