[pull] master from MadNLP:master #47

Open · wants to merge 27 commits into base: master

Commits (27):
- 7d0369a [MOI] Do not overwrite options in wrapper (#367) (frapac, Sep 5, 2024)
- b5de851 [FIX] fix missing check in get_status_output (#369) (frapac, Sep 5, 2024)
- 7e01b1f CI path update (#370) (sshin23, Nov 6, 2024)
- efa792c Make library relocatable (#362) (tmmsartor, Nov 6, 2024)
- a1dc3a6 [DOC] Add tutorial on warm-start (#372) (frapac, Nov 12, 2024)
- 4cbd998 [KKT] Add K2.5 formulation for augmented KKT system (#352) (frapac, Nov 12, 2024)
- fb02d29 [MOI] update signatures of callbacks in MOI wrapper (#351) (frapac, Nov 12, 2024)
- 4d38a38 Discontinue support for MadNLPGraph (#374) (sshin23, Nov 12, 2024)
- d8778e4 [DOC] Add tutorial on multi-precision (#371) (frapac, Nov 12, 2024)
- 5df1a02 fix bug in MOI impl when quadratic term has param (#379) (guimarqu, Nov 15, 2024)
- 6d3b93c Update the link for libHSL (#383) (amontoison, Dec 10, 2024)
- 753371d [documentation] Replace the old link of Julia-HSL (#384) (amontoison, Dec 10, 2024)
- e3cf20d CompatHelper: bump compat for CUDSS to 0.4 for package MadNLPGPU, (ke… (Dec 13, 2024)
- fb6644e [MadNLPGPU] Release 0.7.4 (#386) (amontoison, Dec 18, 2024)
- cd87164 [MadNLPTests] Release 0.5.2 (amontoison, Jan 1, 2025)
- acae3da Fix warning related to memory in MadNLPGPU (#392) (amontoison, Jan 4, 2025)
- 5575b37 Remove some deps in the Project.toml of MadNLPGPU (#391) (amontoison, Jan 4, 2025)
- b748bed [MadNLPGPU] Release 0.7.5 (amontoison, Jan 4, 2025)
- 9917706 tuning richardson tol (#393) (sshin23, Jan 8, 2025)
- f794d28 fix MadNLPGPU on CUDA.jl 5.6.0 (frapac, Jan 8, 2025)
- ada6b1b [MadNLPGPU] Release 0.7.6 (amontoison, Jan 8, 2025)
- d6d30ec [CI] Test MadNLPGPU on Julia 1.11 (amontoison, Jan 8, 2025)
- e310cf6 Release 0.8.5 (#397) (amontoison, Jan 10, 2025)
- 3e522e3 [MadNLPGPU] Remove the warning related to CUSOLVER (#401) (amontoison, Jan 20, 2025)
- 0ed8e8b [MadNLPHSL] Add a self-hosted build (#402) (amontoison, Jan 22, 2025)
- 230a047 [MadNLP] Fix the warning with SolverCore.solve! (amontoison, Jan 23, 2025)
- 12838df Upgrade MadNLPHSL.jl (#376) (amontoison, Jan 23, 2025)
2 changes: 1 addition & 1 deletion .github/workflows/CompatHelper.yml
@@ -13,4 +13,4 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COMPATHELPER_PRIV: ${{ secrets.COMPATHELPER_PRIV }} # optional
-      run: julia -e 'using CompatHelper; CompatHelper.main(; subdirs = ["","lib/MadNLPGPU","lib/MadNLPGraph","lib/MadNLPHSL","lib/MadNLPKrylov","lib/MadNLPMumps","lib/MadNLPPardiso","lib/MadNLPTests"])'
+      run: julia -e 'using CompatHelper; CompatHelper.main(; subdirs = ["","lib/MadNLPGPU","lib/MadNLPHSL","lib/MadNLPKrylov","lib/MadNLPMumps","lib/MadNLPPardiso","lib/MadNLPTests"])'
46 changes: 38 additions & 8 deletions .github/workflows/test.yml
@@ -23,12 +23,12 @@ jobs:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
with:
-        version: ${{ matrix.julia-version }}
+        version: ${{ matrix.julia-version }}
- uses: julia-actions/cache@v1
- run: julia --color=yes --project=.ci .ci/ci.jl basic
-  test-moonshot:
+  test-mit:
env:
-      JULIA_DEPOT_PATH: /home/sshin/action-runners/MadNLP/julia-depot/
+      JULIA_DEPOT_PATH: /home/sushin/actions-runner/madnlp-depot/
runs-on: self-hosted
strategy:
matrix:
@@ -42,22 +42,52 @@
- run: julia --color=yes --project=.ci .ci/ci.jl full
- uses: julia-actions/julia-processcoverage@v1
with:
-        directories: src,lib/MadNLPHSL/src,lib/MadNLPPardiso/src,lib/MadNLPMumps/src,lib/MadNLPKrylov/src,lib/MadNLPGraph
-    - uses: codecov/codecov-action@v1
+        directories: src,lib/MadNLPHSL/src,lib/MadNLPPardiso/src,lib/MadNLPMumps/src,lib/MadNLPKrylov/src
+    - uses: codecov/codecov-action@v5
with:
file: lcov.info
-  test-moonshot-cuda:
+  test-cuda:
env:
CUDA_VISIBLE_DEVICES: 1
-      JULIA_DEPOT_PATH: /home/sshin/action-runners/MadNLP/julia-depot/
+      JULIA_DEPOT_PATH: /home/sushin/actions-runner/madnlp-depot/
runs-on: self-hosted
strategy:
matrix:
-        julia-version: ['1.10']
+        julia-version: ['1.10', '1.11']
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
with:
version: ${{ matrix.julia-version }}
- uses: julia-actions/cache@v1
- run: julia --color=yes --project=.ci .ci/ci.jl cuda
+  test-madnlphsl:
+    env:
+      CUDA_VISIBLE_DEVICES: 1
+      JULIA_DEPOT_PATH: /scratch/github-actions/julia_depot_madnlphsl/
+    runs-on: self-hosted
+    strategy:
+      matrix:
+        julia-version: ['1.10']
+        hsl-version: ['2024.11.28'] # '2025.1.19'
+    steps:
+    - uses: actions/checkout@v4
+    - uses: julia-actions/setup-julia@latest
+      with:
+        version: ${{ matrix.julia-version }}
+    - uses: julia-actions/julia-buildpkg@latest
+    - name: Set HSL_VERSION as environment variable
+      run: echo "HSL_VERSION=${{ matrix.hsl-version }}" >> $GITHUB_ENV
+    - name: Install HSL_jll.jl
+      shell: julia --color=yes {0}
+      run: |
+        using Pkg
+        Pkg.activate("./lib/MadNLPHSL")
+        path_HSL_jll = "/scratch/github-actions/actions_runner_madnlphsl/HSL_jll.jl.v" * ENV["HSL_VERSION"]
+        Pkg.develop(path=path_HSL_jll)
+    - name: Test MadNLPHSL.jl
+      shell: julia --color=yes {0}
+      run: |
+        using Pkg
+        Pkg.develop(path="./lib/MadNLPHSL")
+        Pkg.test("MadNLPHSL")
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,6 +1,6 @@
name = "MadNLP"
uuid = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
version = "0.8.4"
version = "0.8.5"

[deps]
LDLFactorizations = "40e66cde-538c-5869-a4ad-c39174c6795b"
@@ -20,12 +20,12 @@ MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
MadNLPMOI = "MathOptInterface"

[compat]
-LDLFactorizations = "0.10"
MINLPTests = "~0.5"
MadNLPTests = "0.5"
MathOptInterface = "1"
NLPModels = "~0.17.2, 0.18, 0.19, 0.20, 0.21"
SolverCore = "~0.3"
+LDLFactorizations = "0.10"
julia = "1.9"

[extras]
2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -1,7 +1,9 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+ExaModels = "1037b233-b668-4ce9-9b63-f9f681f55dd2"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
MadNLPTests = "b52a2a03-04ab-4a5f-9698-6a2deff93217"
NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
+Quadmath = "be4d8f0f-7fa4-5f49-b795-2f01399ab2dd"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
4 changes: 4 additions & 0 deletions docs/make.jl
@@ -17,6 +17,10 @@ makedocs(
"Installation" => "installation.md",
"Quickstart" => "quickstart.md",
"Options" => "options.md",
"Tutorials" => [
"Multi-precision" => "tutorials/multiprecision.md",
"Warm-start" => "tutorials/warmstart.md",
],
"Manual" => [
"IPM solver" => "man/solver.md",
"KKT systems" => "man/kkt.md",
3 changes: 2 additions & 1 deletion docs/src/installation.md
@@ -16,7 +16,8 @@ In addition to Lapack and Umfpack, the user can install the following extensions
use a specialized linear solver.

## HSL linear solver
-Obtain a license and download HSL_jll.jl from https://licences.stfc.ac.uk/product/julia-hsl. There are two versions available: LBT and OpenBLAS. LBT is the recommended option for Julia >= v1.9. Install this download into your current environment using:
+Obtain a license and download HSL_jll.jl from https://licences.stfc.ac.uk/products/Software/HSL/LibHSL.
+Install this download into your current environment using:
```julia
import Pkg
Pkg.develop(path = "/full/path/to/HSL_jll.jl")
```
92 changes: 92 additions & 0 deletions docs/src/tutorials/hs15.jl
@@ -0,0 +1,92 @@
using NLPModels

struct HS15Model{T} <: NLPModels.AbstractNLPModel{T,Vector{T}}
    meta::NLPModels.NLPModelMeta{T, Vector{T}}
    params::Vector{T}
    counters::NLPModels.Counters
end

function HS15Model(; T = Float64, x0 = zeros(T, 2), y0 = zeros(T, 2))
    return HS15Model(
        NLPModels.NLPModelMeta(
            2,     # nvar
            ncon = 2,
            nnzj = 4,
            nnzh = 3,
            x0 = x0,
            y0 = y0,
            lvar = T[-Inf, -Inf],
            uvar = T[0.5, Inf],
            lcon = T[1.0, 0.0],
            ucon = T[Inf, Inf],
            minimize = true
        ),
        T[100, 1],
        NLPModels.Counters()
    )
end

function NLPModels.obj(nlp::HS15Model, x::AbstractVector)
    p1, p2 = nlp.params
    return p1 * (x[2] - x[1]^2)^2 + (p2 - x[1])^2
end

function NLPModels.grad!(nlp::HS15Model{T}, x::AbstractVector, g::AbstractVector) where T
    p1, p2 = nlp.params
    z = x[2] - x[1]^2
    g[1] = -T(4) * p1 * z * x[1] - T(2) * (p2 - x[1])
    g[2] = T(2) * p1 * z
    return g
end

function NLPModels.cons!(nlp::HS15Model, x::AbstractVector, c::AbstractVector)
    c[1] = x[1] * x[2]
    c[2] = x[1] + x[2]^2
    return c
end

function NLPModels.jac_structure!(nlp::HS15Model, I::AbstractVector{T}, J::AbstractVector{T}) where T
    copyto!(I, [1, 1, 2, 2])
    copyto!(J, [1, 2, 1, 2])
    return I, J
end

function NLPModels.jac_coord!(nlp::HS15Model{T}, x::AbstractVector, J::AbstractVector) where T
    J[1] = x[2]         # (1, 1)
    J[2] = x[1]         # (1, 2)
    J[3] = T(1)         # (2, 1)
    J[4] = T(2) * x[2]  # (2, 2)
    return J
end

function NLPModels.jprod!(nlp::HS15Model{T}, x::AbstractVector, v::AbstractVector, jv::AbstractVector) where T
    jv[1] = x[2] * v[1] + x[1] * v[2]
    jv[2] = v[1] + T(2) * x[2] * v[2]
    return jv
end

function NLPModels.jtprod!(nlp::HS15Model{T}, x::AbstractVector, v::AbstractVector, jv::AbstractVector) where T
    jv[1] = x[2] * v[1] + v[2]
    jv[2] = x[1] * v[1] + T(2) * x[2] * v[2]
    return jv
end

function NLPModels.hess_structure!(nlp::HS15Model, I::AbstractVector{T}, J::AbstractVector{T}) where T
    copyto!(I, [1, 2, 2])
    copyto!(J, [1, 1, 2])
    return I, J
end

function NLPModels.hess_coord!(nlp::HS15Model{T}, x, y, H::AbstractVector; obj_weight = T(1)) where T
    p1, p2 = nlp.params
    # Objective
    H[1] = obj_weight * (-T(4) * p1 * x[2] + T(12) * p1 * x[1]^2 + T(2))
    H[2] = obj_weight * (-T(4) * p1 * x[1])
    H[3] = obj_weight * T(2) * p1
    # First constraint
    H[2] += y[1] * T(1)
    # Second constraint
    H[3] += y[2] * T(2)
    return H
end
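
Once the definitions above are loaded, the model can be passed to MadNLP directly. The following is a minimal usage sketch, not part of this file: the starting point `[-2.0, 1.0]` is the standard HS15 initial guess, and the printed fields come from MadNLP's execution statistics.

```julia
using MadNLP

# Instantiate HS15 in double precision, from the standard starting point.
nlp = HS15Model(; x0 = [-2.0, 1.0])

# Solve with default options; `madnlp` returns the execution statistics.
results = madnlp(nlp)
println(results.status)     # termination status
println(results.objective)  # final objective value
```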

154 changes: 154 additions & 0 deletions docs/src/tutorials/multiprecision.md
@@ -0,0 +1,154 @@
# Running MadNLP in arbitrary precision

```@meta
CurrentModule = MadNLP
```
```@setup multiprecision
using NLPModels
using MadNLP

```

MadNLP is written in pure Julia and, as such, supports solving
optimization problems in arbitrary precision.
By default, MadNLP adapts its precision to the `NLPModel`
passed as input. Most models use `Float64` (indeed, almost
all optimization modeling tools are implemented in double
precision), but for certain applications arbitrary precision
is useful to obtain a more accurate solution.

!!! info
    Several packages in Julia can instantiate an optimization
    model in arbitrary precision. Most of them
    leverage the flexibility offered by [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl).
    In particular, we recommend:
    - [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl/): supports `Float32`, `Float64` and `Float128`.
    - [ExaModels](https://github.com/exanauts/ExaModels.jl): supports `AbstractFloat`.

## Defining a problem in arbitrary precision

As a demonstration, we implement the model [airport](https://vanderbei.princeton.edu/ampl/nlmodels/cute/airport.mod)
from CUTEst using ExaModels. The code reads:
```@example multiprecision
using ExaModels

function airport_model(T)
    N = 42
    # Data
    r = T[0.09, 0.3, 0.09, 0.45, 0.5, 0.04, 0.1, 0.02, 0.02, 0.07, 0.4, 0.045, 0.05, 0.056, 0.36, 0.08, 0.07, 0.36, 0.67, 0.38, 0.37, 0.05, 0.4, 0.66, 0.05, 0.07, 0.08, 0.3, 0.31, 0.49, 0.09, 0.46, 0.12, 0.07, 0.07, 0.09, 0.05, 0.13, 0.16, 0.46, 0.25, 0.1]
    cx = T[-6.3, -7.8, -9.0, -7.2, -5.7, -1.9, -3.5, -0.5, 1.4, 4.0, 2.1, 5.5, 5.7, 5.7, 3.8, 5.3, 4.7, 3.3, 0.0, -1.0, -0.4, 4.2, 3.2, 1.7, 3.3, 2.0, 0.7, 0.1, -0.1, -3.5, -4.0, -2.7, -0.5, -2.9, -1.2, -0.4, -0.1, -1.0, -1.7, -2.1, -1.8, 0.0]
    cy = T[8.0, 5.1, 2.0, 2.6, 5.5, 7.1, 5.9, 6.6, 6.1, 5.6, 4.9, 4.7, 4.3, 3.6, 4.1, 3.0, 2.4, 3.0, 4.7, 3.4, 2.3, 1.5, 0.5, -1.7, -2.0, -3.1, -3.5, -2.4, -1.3, 0.0, -1.7, -2.1, -0.4, -2.9, -3.4, -4.3, -5.2, -6.5, -7.5, -6.4, -5.1, 0.0]
    # Wrap all data in a single iterator for ExaModels
    data = [(i, cx[i], cy[i], r[i]) for i in 1:N]
    IJ = [(i, j) for i in 1:N-1 for j in i+1:N]
    # Write model using ExaModels
    core = ExaModels.ExaCore(T)
    x = ExaModels.variable(core, 1:N, lvar = -10.0, uvar = 10.0)
    y = ExaModels.variable(core, 1:N, lvar = -10.0, uvar = 10.0)
    ExaModels.objective(
        core,
        ((x[i] - x[j])^2 + (y[i] - y[j])^2) for (i, j) in IJ
    )
    ExaModels.constraint(core, (x[i] - dcx)^2 + (y[i] - dcy)^2 - dr for (i, dcx, dcy, dr) in data; lcon = -Inf)
    return ExaModels.ExaModel(core)
end
```

The function `airport_model` takes as input the type used to define the model in ExaModels:
`ExaCore(Float64)` instantiates a model in `Float64`, whereas `ExaCore(Float32)`
instantiates one in `Float32`. Hence, instantiating the `airport` instance in `Float32`
simply amounts to
```@example multiprecision
nlp = airport_model(Float32)

```
We verify that the model is correctly instantiated using `Float32`:
```@example multiprecision
x0 = NLPModels.get_x0(nlp)
println(typeof(x0))
```

## Solving a problem in Float32
Now that we have defined our model in `Float32`, we solve
it using MadNLP. As `nlp` uses `Float32`, MadNLP automatically adjusts
its internal types to `Float32` during instantiation.
By default, the convergence tolerance is also adapted to the input type, with `tol = sqrt(eps(T))`.
Hence, in our case the tolerance is automatically set to
```@example multiprecision
tol = sqrt(eps(Float32))
```
We solve the problem using Lapack as linear solver:
```@example multiprecision
results = madnlp(nlp; linear_solver=LapackCPUSolver)
```
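
The tolerance can also be overridden explicitly through the `tol` option. As a quick sketch (the value `1f-3` is purely illustrative):
```@example multiprecision
results_loose = madnlp(nlp; linear_solver=LapackCPUSolver, tol=1f-3)
results_loose.status
```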

!!! note
    The distribution of Lapack shipped with Julia supports
    `Float32`, so here we do not have to worry about whether the
    type is supported by the linear solver. Almost all linear solvers shipped
    with MadNLP support `Float32`.

The final solution is
```@example multiprecision
results.solution

```
and the objective is
```@example multiprecision
results.objective

```

For completeness, we compare with the solution returned when we solve the
same problem using `Float64`:
```@example multiprecision
nlp_64 = airport_model(Float64)
results_64 = madnlp(nlp_64; linear_solver=LapackCPUSolver)
```
The final objective is now
```@example multiprecision
results_64.objective

```
As expected when solving an optimization problem in `Float32`,
the relative difference between the two solutions is far from negligible:
```@example multiprecision
rel_diff = abs(results.objective - results_64.objective) / results_64.objective
```

## Solving a problem in Float128
Now, we go in the opposite direction and solve a problem in `Float128`
to obtain better accuracy. We start by loading the library `Quadmath` to
work in quadruple precision:
```@example multiprecision
using Quadmath
```
We can instantiate our problem using `Float128` directly as:
```@example multiprecision
nlp_128 = airport_model(Float128)
```


!!! warning
    Unfortunately, few linear solvers support `Float128` out of the box.
    Currently, the only solver supporting quadruple precision in MadNLP is `LDLSolver`, which implements
    [an LDL factorization in pure Julia](https://github.com/JuliaSmoothOptimizers/LDLFactorizations.jl).
    `LDLSolver` is not suited to large-scale nonconvex nonlinear programs,
    but it works if the problem is small enough (as is the case here).

With `LDLSolver` as the linear solver, solving the problem with MadNLP simply amounts to
```@example multiprecision
results_128 = madnlp(nlp_128; linear_solver=LDLSolver)

```
Note that the final tolerance is much lower than before.
We get the solution in quadruple precision
```@example multiprecision
results_128.solution
```
as well as the final objective:
```@example multiprecision
results_128.objective
```
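
To quantify the accuracy gained, we can compare the `Float128` objective with the `Float64` objective computed earlier, mirroring the `Float32` comparison above:
```@example multiprecision
# Convert the Float128 objective to Float64 before comparing.
rel_diff_128 = abs(Float64(results_128.objective) - results_64.objective) / abs(results_64.objective)
```
The relative difference should now be on the order of the `Float64` solve's own tolerance, rather than the much larger gap observed with `Float32`.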

