(WIP) Test script sIPOPT #1

Open pull request: wants to merge 109 commits into base: master. Changes shown from 6 commits.
4b397a6
working script sipopt
andrewrosemberg Jun 28, 2024
8098c3e
add todo Np
andrewrosemberg Jun 28, 2024
1ac01c4
test dense hessian
andrewrosemberg Jun 28, 2024
ac8a239
add dense jacobian
andrewrosemberg Jun 28, 2024
188ab6f
add parameters to the calculations
andrewrosemberg Jun 28, 2024
2b30d5d
add overload script (wip)
andrewrosemberg Jun 28, 2024
540b792
start nlp change
andrewrosemberg Jul 1, 2024
a9450fa
add full hessian and jacobian
andrewrosemberg Jul 1, 2024
71879ba
first try working code
andrewrosemberg Jul 1, 2024
c175d52
comment
andrewrosemberg Jul 1, 2024
c026c99
update code
andrewrosemberg Jul 1, 2024
d163dfa
add comments
andrewrosemberg Jul 1, 2024
223b4b6
add TODO tests
andrewrosemberg Jul 1, 2024
7f05795
add todo slack variables
andrewrosemberg Jul 1, 2024
7caec85
add slack variables
andrewrosemberg Jul 2, 2024
983c182
add comments
andrewrosemberg Jul 2, 2024
cd6d518
adding tests
andrewrosemberg Jul 2, 2024
97e5379
add hessian and jacobian test
andrewrosemberg Jul 2, 2024
41672ce
start adding tests derivatives
andrewrosemberg Jul 2, 2024
a8bae3a
working derivatives
andrewrosemberg Jul 4, 2024
17462cc
add nominal derivatives test
andrewrosemberg Jul 4, 2024
0aadfe1
intermdetate solution
andrewrosemberg Jul 5, 2024
25dc222
add upperbound
andrewrosemberg Jul 6, 2024
0ef56d8
all tests passing
andrewrosemberg Jul 8, 2024
f00931f
fix type
andrewrosemberg Jul 8, 2024
f593afa
start adding approximation tests
andrewrosemberg Jul 9, 2024
2a97799
fix sign
andrewrosemberg Jul 9, 2024
52f075b
update signs
andrewrosemberg Jul 9, 2024
6b567b5
update solution structure for test
andrewrosemberg Jul 9, 2024
55f9b64
debug tests
andrewrosemberg Jul 11, 2024
9b00e39
update debug
andrewrosemberg Jul 11, 2024
c695c52
fix signs
andrewrosemberg Jul 12, 2024
9b0f72f
fix signs
andrewrosemberg Jul 12, 2024
2f9bc7f
fix typos
andrewrosemberg Jul 12, 2024
ebe789c
adapt signs to obj sense
andrewrosemberg Jul 12, 2024
4762fb0
add comments
andrewrosemberg Jul 12, 2024
18e8bf8
debug fix and relax
andrewrosemberg Jul 12, 2024
02592a9
update tests
andrewrosemberg Jul 15, 2024
763d0c0
use sparse matrix
andrewrosemberg Jul 15, 2024
41b88d9
add tests with variable constrained by equalto
andrewrosemberg Jul 15, 2024
7d91082
allow variable fix
andrewrosemberg Jul 15, 2024
2538e9c
add more tests
andrewrosemberg Jul 15, 2024
a3f8e1b
start test bilevel
andrewrosemberg Jul 15, 2024
eabce33
pass all to sparse
andrewrosemberg Jul 17, 2024
c2d73cd
add bilevel test
andrewrosemberg Jul 17, 2024
dda6469
update fix tolerance test
andrewrosemberg Jul 24, 2024
bb753ff
increase tests
andrewrosemberg Jul 25, 2024
93d3981
more unit-tests
andrewrosemberg Jul 25, 2024
e20f4a8
update slack approximation
andrewrosemberg Jul 25, 2024
7ddb3d2
fix order sign change
andrewrosemberg Jul 29, 2024
7b3e591
update tests
andrewrosemberg Jul 29, 2024
839158d
add singular example and create bilevel file
andrewrosemberg Jul 29, 2024
3fdfc0f
add singular correction
andrewrosemberg Jul 30, 2024
c474dda
add non-linear bilevel test
andrewrosemberg Jul 30, 2024
276b6ef
add strategic bidding
andrewrosemberg Jul 30, 2024
873e080
update test
andrewrosemberg Aug 6, 2024
5032c45
add opf
andrewrosemberg Aug 6, 2024
66dfe60
add ac stregic bidding
andrewrosemberg Aug 6, 2024
3cdc51c
update test
andrewrosemberg Aug 6, 2024
a32fd70
improve opf example
andrewrosemberg Aug 7, 2024
e3d88ab
update restoration
andrewrosemberg Aug 7, 2024
9b27e3b
update script example sb ac
andrewrosemberg Aug 7, 2024
6346d33
update code working
andrewrosemberg Aug 8, 2024
1920286
update example
andrewrosemberg Aug 8, 2024
c74e930
update
andrewrosemberg Aug 15, 2024
f446840
add example
andrewrosemberg Aug 15, 2024
544aa0b
fix help
andrewrosemberg Aug 15, 2024
c3a1e97
update
andrewrosemberg Aug 15, 2024
cff5ad0
add moonlander example
andrewrosemberg Aug 26, 2024
b3f4d94
add moonlander test
andrewrosemberg Aug 26, 2024
9abb194
update tests
andrewrosemberg Sep 4, 2024
533d3a4
start fix sign
andrewrosemberg Sep 5, 2024
28c40b4
update code
andrewrosemberg Sep 5, 2024
478e7db
separate tests
andrewrosemberg Sep 11, 2024
4eebfda
update code
andrewrosemberg Sep 11, 2024
0c0f63b
update debug
andrewrosemberg Sep 12, 2024
a043847
fix bug
andrewrosemberg Sep 12, 2024
95d4c67
Merge pull request #3 from andrewrosemberg/ar/sipop2
andrewrosemberg Sep 12, 2024
a9c3c8d
update bidding
andrewrosemberg Sep 24, 2024
ed20b3c
update comparison
Sep 25, 2024
a5deb41
update results
Sep 27, 2024
a748520
udapte
Oct 8, 2024
1803fbb
fix typo
Oct 9, 2024
7000113
update
Oct 10, 2024
3203209
update
Oct 11, 2024
425cf56
update
Oct 11, 2024
bc87a86
update
Oct 11, 2024
68e7d49
update
Oct 11, 2024
0eba400
try distributed
Oct 11, 2024
5d98548
update
Oct 11, 2024
1661f37
update
Oct 11, 2024
cd90ce4
update
Oct 12, 2024
ccbba5b
update
Oct 12, 2024
11a5055
update
Oct 14, 2024
3309208
update
Oct 14, 2024
84585a5
update
Oct 15, 2024
56655d3
update
Oct 15, 2024
7fd09fd
update
Oct 16, 2024
7ffd2f8
update
Oct 17, 2024
b0aca2f
update
Oct 17, 2024
145f8a9
update new solvers
Oct 24, 2024
a83712b
update final
Oct 24, 2024
fc0a3ab
update
Oct 26, 2024
0cf8892
update
Oct 26, 2024
d082f73
update
Oct 26, 2024
cf07259
update
Oct 27, 2024
68cdf17
update
Oct 28, 2024
e022547
update
Oct 29, 2024
7a343dd
time tables
andrewrosemberg Feb 21, 2025
282 changes: 282 additions & 0 deletions MOI.jl
using MathOptInterface
const MOI = MathOptInterface
using MathOptInterface.Nonlinear
import SparseArrays
import ForwardDiff
import MathOptInterface.Nonlinear: Model, ConstraintIndex
# Internal helpers from MOI's ReverseAD submodule (not public API; these may
# change between MathOptInterface versions).
import MathOptInterface.Nonlinear.ReverseAD:
    _forward_eval_ϵ, _FunctionStorage, _SubexpressionStorage,
    _UnsafeVectorView, _UnsafeLowerTriangularMatrixView, TAG

# NOTE: a struct cannot be defined under another module's name, so this local
# copy of `MOI.Nonlinear.Evaluator` (extended with `parameters_as_variables`)
# is declared as a plain `Evaluator`.
mutable struct Evaluator{B} <: MOI.AbstractNLPEvaluator
# The internal datastructure.
model::Model
# The abstract-differentiation backend
backend::B
# ordered_constraints is needed because `OrderedDict` doesn't support
# looking up a key by the linear index.
ordered_constraints::Vector{ConstraintIndex}
# Storage for the NLPBlockDual, so that we can query the dual of individual
# constraints without needing to query the full vector each time.
constraint_dual::Vector{Float64}
# Timers
initialize_timer::Float64
eval_objective_timer::Float64
eval_constraint_timer::Float64
eval_objective_gradient_timer::Float64
eval_constraint_gradient_timer::Float64
eval_constraint_jacobian_timer::Float64
eval_hessian_objective_timer::Float64
eval_hessian_constraint_timer::Float64
eval_hessian_lagrangian_timer::Float64
parameters_as_variables::Bool

function Evaluator(
model::Model,
backend::B = nothing,
) where {B<:Union{Nothing,MOI.AbstractNLPEvaluator}}
return new{B}(
model,
backend,
ConstraintIndex[],
Float64[],
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
false
)
end
end

# Outer constructor taking every field explicitly: build the evaluator with
# the two-argument inner constructor, then fill in the remaining fields
# (`parameters_as_variables` defaults to `false`).
function Evaluator{B}(
    model::Model,
    backend::B,
    ordered_constraints::Vector{ConstraintIndex},
    constraint_dual::Vector{Float64},
    initialize_timer::Float64,
    eval_objective_timer::Float64,
    eval_constraint_timer::Float64,
    eval_objective_gradient_timer::Float64,
    eval_constraint_gradient_timer::Float64,
    eval_constraint_jacobian_timer::Float64,
    eval_hessian_objective_timer::Float64,
    eval_hessian_constraint_timer::Float64,
    eval_hessian_lagrangian_timer::Float64,
) where {B<:Union{Nothing,MOI.AbstractNLPEvaluator}}
    evaluator = Evaluator(model, backend)
    evaluator.ordered_constraints = ordered_constraints
    evaluator.constraint_dual = constraint_dual
    evaluator.initialize_timer = initialize_timer
    evaluator.eval_objective_timer = eval_objective_timer
    evaluator.eval_constraint_timer = eval_constraint_timer
    evaluator.eval_objective_gradient_timer = eval_objective_gradient_timer
    evaluator.eval_constraint_gradient_timer = eval_constraint_gradient_timer
    evaluator.eval_constraint_jacobian_timer = eval_constraint_jacobian_timer
    evaluator.eval_hessian_objective_timer = eval_hessian_objective_timer
    evaluator.eval_hessian_constraint_timer = eval_hessian_constraint_timer
    evaluator.eval_hessian_lagrangian_timer = eval_hessian_lagrangian_timer
    return evaluator
end


function MOI.Nonlinear.ReverseAD._forward_eval_ϵ(
d::MOI.Nonlinear.ReverseAD.NLPEvaluator,
ex::Union{_FunctionStorage,_SubexpressionStorage},
storage_ϵ::AbstractVector{ForwardDiff.Partials{N,T}},
partials_storage_ϵ::AbstractVector{ForwardDiff.Partials{N,T}},
x_values_ϵ,
subexpression_values_ϵ,
user_operators::Nonlinear.OperatorRegistry,
) where {N,T}
@assert length(storage_ϵ) >= length(ex.nodes)
@assert length(partials_storage_ϵ) >= length(ex.nodes)
zero_ϵ = zero(ForwardDiff.Partials{N,T})
# ex.nodes is already in order such that parents always appear before children
# so a backwards pass through ex.nodes is a forward pass through the tree
children_arr = SparseArrays.rowvals(ex.adj)
for k in length(ex.nodes):-1:1
node = ex.nodes[k]
partials_storage_ϵ[k] = zero_ϵ
if node.type == Nonlinear.NODE_VARIABLE
storage_ϵ[k] = x_values_ϵ[node.index]
elseif node.type == Nonlinear.NODE_VALUE
storage_ϵ[k] = zero_ϵ
elseif node.type == Nonlinear.NODE_SUBEXPRESSION
storage_ϵ[k] = subexpression_values_ϵ[node.index]
elseif node.type == Nonlinear.NODE_PARAMETER
storage_ϵ[k] = x_values_ϵ[node.index]
# elseif !(d.parameters_as_variables) && node.type == Nonlinear.NODE_PARAMETER
# storage_ϵ[k] = zero_ϵ
else
@assert node.type != Nonlinear.NODE_MOI_VARIABLE
ϵtmp = zero_ϵ
@inbounds children_idx = SparseArrays.nzrange(ex.adj, k)
for c_idx in children_idx
@inbounds ix = children_arr[c_idx]
@inbounds partial = ex.partials_storage[ix]
@inbounds storage_val = storage_ϵ[ix]
# TODO: This "if" statement can take 8% of the hessian
# evaluation time. Find a more efficient way.
if !isfinite(partial) && storage_val == zero_ϵ
continue
end
ϵtmp += storage_val * ex.partials_storage[ix]
end
storage_ϵ[k] = ϵtmp
if node.type == Nonlinear.NODE_CALL_MULTIVARIATE
# TODO(odow): consider how to refactor this into Nonlinear.
op = node.index
n_children = length(children_idx)
if op == 3 # :*
# Lazy approach for now.
anyzero = false
tmp_prod = one(ForwardDiff.Dual{TAG,T,N})
for c_idx in children_idx
ix = children_arr[c_idx]
sval = ex.forward_storage[ix]
gnum = ForwardDiff.Dual{TAG}(sval, storage_ϵ[ix])
tmp_prod *= gnum
anyzero = ifelse(sval * sval == zero(T), true, anyzero)
end
# By a quirk of floating-point numbers, we can have
# anyzero == true && ForwardDiff.value(tmp_prod) != zero(T)
if anyzero || n_children <= 2
for c_idx in children_idx
prod_others = one(ForwardDiff.Dual{TAG,T,N})
for c_idx2 in children_idx
(c_idx == c_idx2) && continue
ix = children_arr[c_idx2]
gnum = ForwardDiff.Dual{TAG}(
ex.forward_storage[ix],
storage_ϵ[ix],
)
prod_others *= gnum
end
partials_storage_ϵ[children_arr[c_idx]] =
ForwardDiff.partials(prod_others)
end
else
for c_idx in children_idx
ix = children_arr[c_idx]
prod_others =
tmp_prod / ForwardDiff.Dual{TAG}(
ex.forward_storage[ix],
storage_ϵ[ix],
)
partials_storage_ϵ[ix] =
ForwardDiff.partials(prod_others)
end
end
elseif op == 4 # :^
@assert n_children == 2
idx1 = first(children_idx)
idx2 = last(children_idx)
@inbounds ix1 = children_arr[idx1]
@inbounds ix2 = children_arr[idx2]
@inbounds base = ex.forward_storage[ix1]
@inbounds base_ϵ = storage_ϵ[ix1]
@inbounds exponent = ex.forward_storage[ix2]
@inbounds exponent_ϵ = storage_ϵ[ix2]
base_gnum = ForwardDiff.Dual{TAG}(base, base_ϵ)
exponent_gnum = ForwardDiff.Dual{TAG}(exponent, exponent_ϵ)
if exponent == 2
partials_storage_ϵ[ix1] = 2 * base_ϵ
elseif exponent == 1
partials_storage_ϵ[ix1] = zero_ϵ
else
partials_storage_ϵ[ix1] = ForwardDiff.partials(
exponent_gnum * base_gnum^(exponent_gnum - 1),
)
end
result_gnum = ForwardDiff.Dual{TAG}(
ex.forward_storage[k],
storage_ϵ[k],
)
# TODO(odow): fix me to use NaNMath.jl instead
log_base_gnum = base_gnum < 0 ? NaN : log(base_gnum)
partials_storage_ϵ[ix2] =
ForwardDiff.partials(result_gnum * log_base_gnum)
elseif op == 5 # :/
@assert n_children == 2
idx1 = first(children_idx)
idx2 = last(children_idx)
@inbounds ix1 = children_arr[idx1]
@inbounds ix2 = children_arr[idx2]
@inbounds numerator = ex.forward_storage[ix1]
@inbounds numerator_ϵ = storage_ϵ[ix1]
@inbounds denominator = ex.forward_storage[ix2]
@inbounds denominator_ϵ = storage_ϵ[ix2]
recip_denominator =
1 / ForwardDiff.Dual{TAG}(denominator, denominator_ϵ)
partials_storage_ϵ[ix1] =
ForwardDiff.partials(recip_denominator)
partials_storage_ϵ[ix2] = ForwardDiff.partials(
-ForwardDiff.Dual{TAG}(numerator, numerator_ϵ) *
recip_denominator *
recip_denominator,
)
elseif op > 6
f_input = _UnsafeVectorView(d.jac_storage, n_children)
for (i, c) in enumerate(children_idx)
f_input[i] = ex.forward_storage[children_arr[c]]
end
H = _UnsafeLowerTriangularMatrixView(
d.user_output_buffer,
n_children,
)
has_hessian = Nonlinear.eval_multivariate_hessian(
user_operators,
user_operators.multivariate_operators[node.index],
H,
f_input,
)
# This might be `false` if we extend this code to all
# multivariate functions.
@assert has_hessian
for col in 1:n_children
dual = zero(ForwardDiff.Partials{N,T})
for row in 1:n_children
# Make sure we get the lower-triangular component.
h = row >= col ? H[row, col] : H[col, row]
# Performance optimization: hessians can be quite
# sparse
if !iszero(h)
i = children_arr[children_idx[row]]
dual += h * storage_ϵ[i]
end
end
i = children_arr[children_idx[col]]
partials_storage_ϵ[i] = dual
end
end
elseif node.type == Nonlinear.NODE_CALL_UNIVARIATE
@inbounds child_idx = children_arr[ex.adj.colptr[k]]
f′′ = Nonlinear.eval_univariate_hessian(
user_operators,
user_operators.univariate_operators[node.index],
ex.forward_storage[child_idx],
)
partials_storage_ϵ[child_idx] = f′′ * storage_ϵ[child_idx]
end
end
end
return storage_ϵ[1]
end
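The overload above seeds `NODE_PARAMETER` entries with their own perturbation vectors, so the forward-over-reverse pass differentiates with respect to parameters as well as primal variables. The payoff is the cross block ∇²ₓₚL of the Lagrangian, which the sensitivity calculation in the test script needs. A finite-difference illustration of that block (NumPy, purely illustrative; the toy `L` and all names here are assumptions, not part of the PR):

```python
import numpy as np

# The cross block of a Lagrangian L(x, p): d/dp of grad_x L, i.e. the second
# derivatives mixing primal variables and parameters. Central differences are
# exact here up to roundoff (L is quadratic in x and linear in p).
def grad_x(L, x, p, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (L(x + e, p) - L(x - e, p)) / (2 * h)
    return g

def cross_block(L, x, p, h=1e-4):
    return (grad_x(L, x, p + h) - grad_x(L, x, p - h)) / (2 * h)

L = lambda x, p: p * x[0] * x[1] + x[0] ** 2   # toy Lagrangian
J = cross_block(L, np.array([1.0, 2.0]), 3.0)  # analytic block: [x2, x1] = [2, 1]
```

An AD backend that treats parameters as variables returns this block as part of one larger Hessian, which is exactly why the evaluator gains a `parameters_as_variables` flag.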
112 changes: 112 additions & 0 deletions testing_barrier.jl
################################################
# sIPOPT test
################################################

using JuMP
using SparseArrays
using LinearAlgebra
using Ipopt
# using MadNLP
# using KNITRO

############
# Test Case
############

# Define the problem
model = Model(Ipopt.Optimizer)

# Parameters
@variable(model, p ∈ MOI.Parameter(1.0))
@variable(model, p2 ∈ MOI.Parameter(2.0))
[Review comment on the parameter definitions] Why not p == 1.0 instead to use the MOI.EqualTo set?
# Variables
@variable(model, x)
@variable(model, y)

# Constraints
@constraint(model, con1, y >= p*sin(x)) # NLP Constraint
@constraint(model, con2, x + y == p)
@constraint(model, con3, p2 * x >= 0.1)
@objective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2) # NLP Objective
optimize!(model)

# Check local optimality
termination_status(model)

############
# Retrieve important quantities
############

function _dense_hessian(hessian_sparsity, V, n)
I = [i for (i, _) in hessian_sparsity]
J = [j for (_, j) in hessian_sparsity]
raw = SparseArrays.sparse(I, J, V, n, n)
return Matrix(
raw + raw' -
SparseArrays.sparse(LinearAlgebra.diagm(0 => LinearAlgebra.diag(raw))),
)
end
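`_dense_hessian` mirrors the single triangle that MOI reports into the full symmetric matrix: `sparse` accumulates duplicate (i, j) entries, and the diagonal is subtracted once so `raw + raw'` does not double-count it. The same recipe in NumPy (illustrative only; names are not from the PR):

```python
import numpy as np

# Rebuild a dense symmetric Hessian from one-triangle (i, j, value) triplets,
# mirroring the Julia `_dense_hessian` helper: duplicate entries accumulate,
# and the diagonal is subtracted once so raw + raw.T does not double-count it.
def dense_hessian(sparsity, values, n):
    raw = np.zeros((n, n))
    for (i, j), v in zip(sparsity, values):
        raw[i - 1, j - 1] += v  # 1-based indices, as in MOI's structure
    return raw + raw.T - np.diag(np.diag(raw))

# Lower triangle of [[2, 1], [1, 4]]:
H = dense_hessian([(1, 1), (2, 1), (2, 2)], [2.0, 1.0, 4.0], 2)
print(H)  # -> [[2. 1.]
          #    [1. 4.]]
```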

function _dense_jacobian(jacobian_sparsity, V, m, n)
I = [i for (i, j) in jacobian_sparsity]
J = [j for (i, j) in jacobian_sparsity]
raw = SparseArrays.sparse(I, J, V, m, n)
return Matrix(raw)
end

# Primal Solution
primal_values = value.([x, y, p, p2])
dual_values = dual.([con1; con2; con3])
num_vars = length(primal_values)
num_cons = length(dual_values)

# `Evaluator`: object used to evaluate the model's functions and related quantities (gradient, Jacobian, Hessian, ...)
evaluator = JuMP.MOI.Nonlinear.Evaluator(model.moi_backend.optimizer.model.nlp_model, JuMP.MOI.Nonlinear.SparseReverseMode(),
[ model.moi_backend.model_to_optimizer_map[index(x)],
model.moi_backend.model_to_optimizer_map[index(y)],
model.moi_backend.model_to_optimizer_map[index(p)],
model.moi_backend.model_to_optimizer_map[index(p2)],
]
)

# Define what we will need to evaluate
MOI.initialize(evaluator, [:Grad, :Jac, :Hess, :JacVec])

# Hessian structure (one triangle of the symmetric matrix) and a placeholder vector that will be filled during evaluation of the Hessian
hessian_sparsity = MOI.hessian_lagrangian_structure(evaluator)
W = fill(NaN, length(hessian_sparsity))

# Fill W with the values of the Hessian of the Lagrangian
MOI.eval_hessian_lagrangian(evaluator, W, primal_values, 1.0, dual_values)
W = _dense_hessian(hessian_sparsity, W, num_vars)

# Placeholder for the Jacobian of the constraints
jacobian_sparsity = MOI.jacobian_structure(evaluator)
A = zeros(length(jacobian_sparsity))

# Evaluate Jacobian
MOI.eval_constraint_jacobian(evaluator, A, primal_values)
A = _dense_jacobian(jacobian_sparsity, A, num_cons, num_vars)

# TODO: ∇ₓₚL (partial second derivative of the Lagrangian wrt the primal solution and the parameters);
# TODO: ∇ₚC (partial derivative of the equality constraints wrt the parameters).
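For reference, the linear system this section is assembling is the sensitivity form of the perturbed KKT conditions used by sIPOPT. A sketch, written with $A$ as the constraint Jacobian (so the first block row carries $A^\top$); the exact signs and the bound-complementarity rows depend on the conventions chosen above:

```latex
M \, \frac{\partial s}{\partial p} = -N,
\qquad
M =
\begin{bmatrix}
W & A^\top & -I \\
A & 0 & 0 \\
V & 0 & X
\end{bmatrix},
\qquad
N =
\begin{bmatrix}
\nabla_{xp} L \\
\nabla_{p} C \\
0
\end{bmatrix},
```

where $W$ is the Hessian of the Lagrangian, $V$ and $X$ are the diagonal matrices of bound multipliers and primal values, and $\nabla_{xp} L$, $\nabla_{p} C$ are the two missing blocks flagged in the TODOs.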

############
# (WORK IN PROGRESS) Non-working code: calculate sensitivity
############
V = diagm(dual_values)
X = diagm(primal_values)

M = [
[W A -I];
[A' 0 0];
[V 0 X]
]

N = [∇ₓₚL ; ∇ₚC]

# sensitivity of the solution (primal-dual) with respect to the parameters
∂s = inv(M) * N
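The WIP block above inverts M directly. As a minimal runnable sketch of the same idea (in NumPy rather than Julia, without the bound-complementarity rows, and with all names being illustrative assumptions), here is the sensitivity system for a tiny equality-constrained QP, min ½(x₁² + x₂²) s.t. x₁ + x₂ = p:

```python
import numpy as np

# Sensitivity of the KKT point of: min 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 - p = 0.
# Stationarity is x + lam * A' = 0, so the solution is x = [p/2, p/2], lam = -p/2.
# Differentiating the KKT conditions w.r.t. p gives the linear system K @ ds = -N.
W = np.eye(2)                    # Hessian of the Lagrangian (constant here)
A = np.array([[1.0, 1.0]])       # constraint Jacobian
K = np.block([[W, A.T], [A, np.zeros((1, 1))]])
N = np.array([0.0, 0.0, -1.0])   # [d/dp grad_x L; d/dp c], with dc/dp = -1
ds = np.linalg.solve(K, -N)      # [dx1/dp, dx2/dp, dlam/dp]
print(ds)                        # -> [ 0.5  0.5 -0.5]
```

In practice one factorizes K (sparse, as the later commits do) and solves, rather than forming `inv(M)` explicitly.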