\preamble{Tangi Migot}

- [![NLPModels 0.20.0](https://img.shields.io/badge/NLPModels-0.20.0-8b0000?style=flat-square&labelColor=cb3c33)](https://juliasmoothoptimizers.github.io/NLPModels.jl/stable/)
- [![NLPModelsJuMP 0.12.1](https://img.shields.io/badge/NLPModelsJuMP-0.12.1-8b0000?style=flat-square&labelColor=cb3c33)](https://juliasmoothoptimizers.github.io/NLPModelsJuMP.jl/stable/)
- [![ADNLPModels 0.7.0](https://img.shields.io/badge/ADNLPModels-0.7.0-8b0000?style=flat-square&labelColor=cb3c33)](https://juliasmoothoptimizers.github.io/ADNLPModels.jl/stable/)
- ![JuMP 1.12.0](https://img.shields.io/badge/JuMP-1.12.0-000?style=flat-square&labelColor=999)
- [![OptimizationProblems 0.7.1](https://img.shields.io/badge/OptimizationProblems-0.7.1-8b0000?style=flat-square&labelColor=cb3c33)](https://juliasmoothoptimizers.github.io/OptimizationProblems.jl/stable/)
+ [![NLPModels 0.21.3](https://img.shields.io/badge/NLPModels-0.21.3-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/NLPModels.jl/stable/)
+ [![NLPModelsJuMP 0.13.2](https://img.shields.io/badge/NLPModelsJuMP-0.13.2-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/NLPModelsJuMP.jl/stable/)
+ [![ADNLPModels 0.8.7](https://img.shields.io/badge/ADNLPModels-0.8.7-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/ADNLPModels.jl/stable/)
+ ![JuMP 1.23.2](https://img.shields.io/badge/JuMP-1.23.2-000?style=flat-square&labelColor=999)
+ [![OptimizationProblems 0.9.0](https://img.shields.io/badge/OptimizationProblems-0.9.0-8b0000?style=flat-square&labelColor=cb3c33)](https://jso.dev/OptimizationProblems.jl/stable/)
@@ -26,7 +26,7 @@ length(problems)
```

``` plaintext
- 288
+ 372
```

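For context, the problem count above can also be recovered from the package's metadata table; a minimal sketch, assuming the standard `OptimizationProblems.meta` DataFrame with its `:name` column:

```julia
using OptimizationProblems

# `meta` has one row per problem; :name lists every available problem,
# matching the count returned by `length(problems)` in the tutorial.
problems = OptimizationProblems.meta[!, :name]
length(problems)   # 372 with OptimizationProblems 0.9.0
```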
@@ -39,14 +39,14 @@ jump_model = OptimizationProblems.PureJuMP.zangwil3()

``` plaintext
A JuMP Model
- Minimization problem with:
- Variables: 3
- Objective function type: Nonlinear
- `JuMP.AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 3 constraints
- Model mode: AUTOMATIC
- CachingOptimizer state: NO_OPTIMIZER
- Solver name: No optimizer attached.
- Names registered in the model: constr1, constr2, constr3, x
+ ├ solver: none
+ ├ objective_sense: MIN_SENSE
+ │ └ objective_function_type: JuMP.AffExpr
+ ├ num_variables: 3
+ ├ num_constraints: 3
+ │ └ JuMP.AffExpr in MOI.EqualTo{Float64}: 3
+ └ Names registered in the model
+   └ :constr1, :constr2, :constr3, :x
```

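A JuMP model obtained this way can be bridged to the NLPModels API via NLPModelsJuMP; a sketch, not part of the tutorial itself:

```julia
using OptimizationProblems, NLPModelsJuMP, NLPModels

jump_model = OptimizationProblems.PureJuMP.zangwil3()

# Wrap the JuMP model so it exposes the common NLPModels API.
nlp = MathOptNLPModel(jump_model)

# Evaluate objective and constraints at the problem's starting point.
fx = obj(nlp, nlp.meta.x0)
cx = cons(nlp, nlp.meta.x0)   # zangwil3 has 3 equality constraints
```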
@@ -59,7 +59,7 @@ length(var_problems)
```

``` plaintext
- 94
+ 95
```

@@ -72,13 +72,13 @@ jump_model_12 = OptimizationProblems.PureJuMP.woods(n=12)

``` plaintext
A JuMP Model
- Minimization problem with:
- Variables: 12
- Objective function type: Nonlinear
- Model mode: AUTOMATIC
- CachingOptimizer state: NO_OPTIMIZER
- Solver name: No optimizer attached.
- Names registered in the model: x
+ ├ solver: none
+ ├ objective_sense: MIN_SENSE
+ │ └ objective_function_type: JuMP.NonlinearExpr
+ ├ num_variables: 12
+ ├ num_constraints: 0
+ └ Names registered in the model
+   └ :x
```

@@ -89,13 +89,13 @@ jump_model_120 = OptimizationProblems.PureJuMP.woods(n=120)

``` plaintext
A JuMP Model
- Minimization problem with:
- Variables: 120
- Objective function type: Nonlinear
- Model mode: AUTOMATIC
- CachingOptimizer state: NO_OPTIMIZER
- Solver name: No optimizer attached.
- Names registered in the model: x
+ ├ solver: none
+ ├ objective_sense: MIN_SENSE
+ │ └ objective_function_type: JuMP.NonlinearExpr
+ ├ num_variables: 120
+ ├ num_constraints: 0
+ └ Names registered in the model
+   └ :x
```

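The requested dimension can be checked directly on the returned model with the standard JuMP query function:

```julia
using OptimizationProblems, JuMP

jump_model_120 = OptimizationProblems.PureJuMP.woods(n=120)

# Confirms the scalable problem was instantiated at the requested size.
num_variables(jump_model_120)   # 120
```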
@@ -134,7 +134,7 @@ length(problems)
```

``` plaintext
- 288
+ 372
```

@@ -151,8 +151,8 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
  ForwardDiffADHvprod,
  ForwardDiffADJprod,
  ForwardDiffADJtprod,
- ForwardDiffADJacobian,
- ForwardDiffADHessian,
+ SparseADJacobian,
+ SparseADHessian,
  ForwardDiffADGHjvprod,
}
Problem name: zangwil3
@@ -163,7 +163,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   fixed: ████████████████████ 3
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
- nnzh: (  0.00% sparsity)   6   linear: ████████████████████ 3
+ nnzh: (100.00% sparsity)   0   linear: ████████████████████ 3
nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzj: (  0.00% sparsity)   9

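With the sparse backends, the Hessian structure reported in the summary can be inspected through the usual NLPModels calls; a small sketch:

```julia
using OptimizationProblems, NLPModels

nlp = OptimizationProblems.ADNLPProblems.zangwil3()

# SparseADHessian stores only structural nonzeros; for zangwil3 the
# objective has no second-order terms, hence nnzh = 0 (100% sparsity).
nlp.meta.nnzh
hess(nlp, nlp.meta.x0)   # sparse (lower-triangular) Hessian at x0
```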
@@ -192,7 +192,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
  EmptyADbackend,
  EmptyADbackend,
  EmptyADbackend,
- ForwardDiffADHessian,
+ SparseADHessian,
  EmptyADbackend,
}
Problem name: woods
@@ -203,7 +203,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
- nnzh: ( 0.00% sparsity)   78   linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
+ nnzh: (73.08% sparsity)   21   linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzj: (------% sparsity)

@@ -230,7 +230,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
  EmptyADbackend,
  EmptyADbackend,
  EmptyADbackend,
- ForwardDiffADHessian,
+ SparseADHessian,
  EmptyADbackend,
}
Problem name: woods
@@ -241,7 +241,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
- nnzh: ( 0.00% sparsity)   7260   linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
+ nnzh: (97.11% sparsity)   210   linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzj: (------% sparsity)

@@ -258,9 +258,9 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{

- One of the advantages of these problems is that they are type-stable. Indeed, one can specify the output type with the keyword `type` as follows.
+ One of the advantages of these problems is that they are type-stable. Indeed, one can specify the output type with the keyword `type` as follows. Note that in versions prior to 0.8 the argument was `type = Val(DataType)`.

``` julia
- nlp16_12 = OptimizationProblems.ADNLPProblems.woods(n=12, type=Val(Float16))
+ nlp16_12 = OptimizationProblems.ADNLPProblems.woods(n=12, type=Float16)
```

``` plaintext
@@ -270,7 +270,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
  EmptyADbackend,
  EmptyADbackend,
  EmptyADbackend,
- ForwardDiffADHessian,
+ SparseADHessian,
  EmptyADbackend,
}
Problem name: woods
@@ -281,7 +281,7 @@ ADNLPModel - Model with automatic differentiation backend ADModelBackend{
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0   infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
- nnzh: ( 0.00% sparsity)   78   linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
+ nnzh: (73.08% sparsity)   21   linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzj: (------% sparsity)
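The type-stability claim can be verified on the returned model; a minimal sketch, using the `type=Float16` keyword form of versions ≥ 0.8:

```julia
using OptimizationProblems, NLPModels

nlp16_12 = OptimizationProblems.ADNLPProblems.woods(n=12, type=Float16)

# The model data and API return values carry the requested element type.
eltype(nlp16_12.meta.x0)                      # Float16
obj(nlp16_12, nlp16_12.meta.x0) isa Float16   # true
```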