jump


- snippet.juliarepl
julia> pkgchk( [ "julia" => v"1.0.2", "JuMP" => v"0.18.0", "Clp" => v"0.3.1", "DataStructures" => v"0.7.4", "Ipopt" => v"0.3.0" ] )

JuMP 0.18 under Julia 0.6.2 is best suited to solving linear and quadratic problems, subject to linear constraints.

In principle, it can also handle general non-linear problems. However, for these, JuMP (1) seems to be too much tool for the job; (2) suffers from changing interfaces; and (3) just seems unreliable and not yet ready. Instead, we recommend IntervalOptimisation.jl, Optim.jl, or NLopt.jl for unconstrained non-linear optimization.

This is the introductory problem in the documentation.

- snippet.juliarepl
julia> using JuMP            ## the frontend which interprets user's commands

julia> using Clp             ## the backend solver

julia> mymodel = Model(solver = ClpSolver())
Feasibility problem with:
 * 0 linear constraints
 * 0 variables
Solver is ClpMathProg

julia> @variable(mymodel, 0 <= x <= 2)       ## first variable, with box constraint
x

julia> @variable(mymodel, 0 <= y <= 30)      ## second variable, with box constraint
y

julia> @constraint(mymodel, 1x + 5y <= 3.0)  ## a non-box constraint
x + 5 y ≤ 3

julia> @objective(mymodel, Max, 5*x + 3*y)   ## the objective function
5 x + 3 y

julia> print(mymodel)        ## nice summary
Max 5 x + 3 y
Subject to
 x + 5 y ≤ 3
 0 ≤ x ≤ 2
 0 ≤ y ≤ 30

julia> status = solve(mymodel)   ## now find the solution
:Optimal

julia> (getobjectivevalue(mymodel), getvalue(x), getvalue(y))
(10.6, 2.0, 0.2)
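The reported optimum `(10.6, 2.0, 0.2)` is easy to verify with an independent LP solver. A minimal cross-check in Python with SciPy's `linprog` (an assumption; SciPy is not part of this wiki's Julia toolchain):

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the objective 5x + 3y.
res = linprog(c=[-5.0, -3.0],
              A_ub=[[1.0, 5.0]], b_ub=[3.0],     # x + 5y <= 3
              bounds=[(0, 2), (0, 30)])          # box constraints on x and y
print(-res.fun, res.x)   # optimum 10.6 at x = 2.0, y = 0.2
```

The solution sits at a vertex: x is pushed to its upper bound 2, leaving 5y ≤ 1, so y = 0.2.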

- Array optimizing variables are more flexible than suggested here, e.g., `@variable(m, x[1:M,1:N] >= 0)`, `@variable(m, x[i=1:10,j=1:10; isodd(i+j)] >= 0)`, or `@variable(m, x[i=1:10], start=(i/2))`.

- snippet.julia
using JuMP, Clp, DataStructures
import MathProgBase

mymodel = Model(solver = ClpSolver());
@variable(mymodel, 0 <= x <= 2);
@variable(mymodel, 0 <= y <= 30);
@constraint(mymodel, 1x + 5y <= 3.0);
## a cone constraint would look like:  @constraint(m, norm(A*x) <= 2w - 1)
@objective(mymodel, Max, 5*x + 3*y);
status = solve(mymodel);  ## status can be :Optimal, :Unbounded, :Infeasible, :UserLimit, :Error, or :NotSolved

function describeModel(mymodel)
    println(mymodel)
    jumpfuns = OrderedDict{Function,String}(
        MathProgBase.numvar        => "number of variables",
        MathProgBase.numlinconstr  => "number of linear constraints",
        MathProgBase.numquadconstr => "number of quadratic constraints",
        MathProgBase.numconstr     => "total number of constraints",
        JuMP.numsocconstr          => "number of second order cone constraints",
        JuMP.numsosconstr          => "number of sos constraints",
        JuMP.numsdconstr           => "number of semi-definite constraints",
        JuMP.numnlconstr           => "number of nonlinear constraints",
        getsolvetime               => "solve time reported by the solver, if implemented",
        getnodecount               => "number of explored branch-and-bound nodes, if implemented",
        getobjbound                => "best known bound on the optimal objective value (e.g., when a branch-and-bound method stops early)",
        getobjectivevalue          => "objective function value",
        getobjectivesense          => "min or max",
        getobjectivebound          => "best known bound on the objective",
        getobjgap                  => "final relative optimality gap |b-f|/|f| as optimization terminated, where b is the best bound and f is the best feasible objective value",
        getsimplexiter             => "cumulative number of simplex iterations (for a MIP, the total over all branch-and-bound nodes)",
        getbarrieriter             => "cumulative number of barrier iterations during the optimization process")
    for k in keys(jumpfuns)
        funresult = try k(mymodel) catch; "not implemented"; end
        println("| ", k, " | ", funresult, " | ", jumpfuns[k], " |")
    end#for
end##function

describeModel(mymodel)

provides:

- snippet.text
Max 5 x + 3 y
Subject to
 x + 5 y ≤ 3
 0 ≤ x ≤ 2
 0 ≤ y ≤ 30

Variable | Value | Explanation |
---|---|---|
MathProgBase.SolverInterface.numvar | 2 | number of variables |
MathProgBase.SolverInterface.numlinconstr | 1 | number of linear constraints |
MathProgBase.SolverInterface.numquadconstr | 0 | number of quadratic constraints |
MathProgBase.SolverInterface.numconstr | 1 | total number of constraints |
JuMP.numsocconstr | 0 | number of second order cone constraints |
JuMP.numsosconstr | 0 | number of sos constraints |
JuMP.numsdconstr | 0 | number of semi-definite constraints |
JuMP.numnlconstr | 0 | number of nonlinear constraints |
MathProgBase.SolverInterface.getsolvetime | not implemented | solve time reported by the solver, if implemented |
MathProgBase.SolverInterface.getnodecount | not implemented | number of explored branch-and-bound nodes, if implemented |
MathProgBase.SolverInterface.getobjbound | not implemented | best known bound on the optimal objective value (e.g., when a branch-and-bound method stops early) |
JuMP.getobjectivevalue | 10.6 | objective function value |
JuMP.getobjectivesense | Max | min or max |
JuMP.getobjectivebound | NaN | best known bound on the objective |
MathProgBase.SolverInterface.getobjgap | not implemented | final relative optimality gap as optimization terminated |
MathProgBase.SolverInterface.getsimplexiter | not implemented | cumulative number of simplex iterations during the optimization process |
MathProgBase.SolverInterface.getbarrieriter | not implemented | cumulative number of barrier iterations during the optimization process |
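The try/catch probing pattern inside `describeModel` is not JuMP-specific. A minimal Python sketch of the same idea (all model and accessor names here are made up for illustration):

```python
from collections import OrderedDict

model = {"nvars": 2, "nlinconstr": 1}            # stand-in for a solved model

def numvar(m):       return m["nvars"]           # hypothetical accessor
def numlinconstr(m): return m["nlinconstr"]      # hypothetical accessor
def getsolvetime(m): raise NotImplementedError   # backend does not provide it

probes = OrderedDict([(numvar, "number of variables"),
                      (numlinconstr, "number of linear constraints"),
                      (getsolvetime, "solve time reported by the solver")])

rows = []
for fn, expl in probes.items():
    try:
        result = fn(model)            # probe the accessor ...
    except NotImplementedError:
        result = "not implemented"    # ... and fall back gracefully
    rows.append((fn.__name__, result, expl))
    print("| {} | {} | {} |".format(fn.__name__, result, expl))
```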

There are other model accessors, like `getrawsolver` (which returns an object that may be used to access a solver-specific API), but they are less useful. At the moment, the `getvalue()` functions strangely do not require the model as an input.

JuMP is not capable of choosing an appropriate solver for the problem. This task is left to the user:

- snippet.juliarepl
julia> using JuMP, Clp

julia> mymodel = Model(solver = ClpSolver());

julia> @variable(mymodel, 0 <= x <= 2);

julia> @variable(mymodel, 0 <= y <= 30);

julia> @objective(mymodel, Max, 5*x^2 + 3*y^2 + x*y);

julia> status = solve(mymodel)   ## now find the solution
ERROR: Solver does not support quadratic objectives
Stacktrace:
 [1] setquadobjterms!(::Clp.ClpMathProgSolverInterface.ClpMathProgModel, ::Array{Int32,1}, ::Array{Int32,1}, ::Array{Float64,1}) at /Users/ivo/.julia/v0.6/MathProgBase/src/SolverInterface/LinearQuadratic.jl:71
 [2] addQuadratics(::JuMP.Model) at /Users/ivo/.julia/v0.6/JuMP/src/solvers.jl:435
 [3] #build#119(::Bool, ::Bool, ::JuMP.ProblemTraits, ::Function, ::JuMP.Model) at /Users/ivo/.julia/v0.6/JuMP/src/solvers.jl:373
 [4] (::JuMP.#kw##build)(::Array{Any,1}, ::JuMP.#build, ::JuMP.Model) at ./<missing>:0
 [5] #solve#116(::Bool, ::Bool, ::Bool, ::Array{Any,1}, ::Function, ::JuMP.Model) at /Users/ivo/.julia/v0.6/JuMP/src/solvers.jl:168
 [6] solve(::JuMP.Model) at /Users/ivo/.julia/v0.6/JuMP/src/solvers.jl:150
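Clp refuses the quadratic objective, so a quadratic-capable solver is needed. Note that this objective is convex, so its maximum over the box must sit at a corner. A hedged cross-check with SciPy (an assumption, not this wiki's toolchain; SciPy minimizes, so the objective is negated):

```python
from scipy.optimize import minimize

# maximize 5x^2 + 3y^2 + xy on 0<=x<=2, 0<=y<=30 by minimizing its negation;
# the gradient pushes both variables up, so the optimum is the corner (2, 30)
f = lambda v: -(5*v[0]**2 + 3*v[1]**2 + v[0]*v[1])
res = minimize(f, x0=[1.0, 1.0], bounds=[(0, 2), (0, 30)], method="L-BFGS-B")
print(res.x, -res.fun)   # corner (2, 30) with value 20 + 2700 + 60 = 2780
```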

Sometimes, for a non-linear `NLobjective`, the error message can be more descriptive:

- snippet.julianoeval
julia> @NLobjective(m, Min, (1-x)^2 + 100(y-x^2)^2)
ERROR: No solver was provided. JuMP has classified this model as NLP. Julia
packages which provide solvers for this class of problems include Ipopt,
KNITRO, and Mosek. The solver must be specified by using either the
"solver=" keyword argument to "Model()" or the "setsolver()" method.

Again, we are not (yet) recommending JuMP for unconstrained non-linear optimization. However, it can work.

Here is a working example:

- snippet.juliarepl
julia> using JuMP, Ipopt

julia> m = Model(solver = IpoptSolver());

julia> @variable(m, 0 <= x[1:2] <= 4);

julia> @constraint(m, sum(x) == 1.5);

julia> @constraint(m, x[1] <= x[2]);

julia> @NLconstraint(m, x[1]^2 + x[2]^2 == 2);

julia> @objective(m, Min, x[1]*x[2]);

julia> solve(m);
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.12.8, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        2
                     variables with only upper bounds:        0
Total number of equality constraints.................:        2
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        1

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  9.9999800e-05 2.00e+00 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.9107754e-01 8.12e-01 5.58e+08  -1.0 7.96e-01    -  1.40e-02 1.00e+00f  1
   2  1.1193824e-02 3.33e-01 1.05e+10  -1.0 1.19e+01    -  6.14e-03 1.22e-01f  1
   3  1.1624711e-01 1.75e-02 1.26e+09  -1.0 1.84e-01    -  9.87e-01 1.00e+00f  1
   4  1.2495708e-01 8.58e-05 1.03e+07  -1.0 1.31e-02    -  1.00e+00 1.00e+00h  1
   5  1.2500000e-01 2.11e-09 5.07e+02  -1.0 6.49e-05    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 5

                                   (scaled)                 (unscaled)
Objective...............:   1.2499999894745066e-01    1.2499999894745066e-01
Dual infeasibility......:   6.6613381477509392e-16    6.6613381477509392e-16
Constraint violation....:   2.1050983178838578e-09    2.1050983178838578e-09
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   2.1050983178838578e-09    2.1050983178838578e-09

Number of objective function evaluations             = 6
Number of objective gradient evaluations             = 6
Number of equality constraint evaluations            = 6
Number of inequality constraint evaluations          = 6
Number of equality constraint Jacobian evaluations   = 6
Number of inequality constraint Jacobian evaluations = 6
Number of Lagrangian Hessian evaluations             = 5
Total CPU secs in IPOPT (w/o function evaluations)   =      0.116
Total CPU secs in NLP function evaluations           =      0.024

EXIT: Optimal Solution Found.

julia> getvalue(x), getobjectivevalue(m)
([0.0885622, 1.41144], 0.12499999894745066)
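The Ipopt result can be sanity-checked algebraically: the two equality constraints already pin down the objective, since x[1]*x[2] = ((x[1]+x[2])^2 - (x[1]^2+x[2]^2))/2 = (2.25 - 2)/2 = 0.125. A hedged cross-check of the same problem with SciPy's SLSQP (an assumption; any NLP solver works):

```python
from scipy.optimize import minimize

# min x1*x2  s.t.  x1+x2 == 1.5,  x1 <= x2,  x1^2+x2^2 == 2,  0 <= xi <= 4
cons = [{"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.5},
        {"type": "ineq", "fun": lambda x: x[1] - x[0]},          # x1 <= x2
        {"type": "eq",   "fun": lambda x: x[0]**2 + x[1]**2 - 2.0}]
res = minimize(lambda x: x[0] * x[1], x0=[0.5, 1.0],
               bounds=[(0, 4), (0, 4)], constraints=cons, method="SLSQP")
print(res.x, res.fun)   # approx [0.0886, 1.4114] and 0.125, matching Ipopt
```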

The following example used to work, but no longer does: JuMP now complains about the model syntax.

- snippet.julianoeval
julia> using JuMP, Ipopt

julia> function testme(n)
           myf(a...) = sum(collect(a).^2)   ## the simplest quadratic function is to be optimized
           m = JuMP.Model(solver=Ipopt.IpoptSolver())
           JuMP.register(m, :myf, n, myf, autodiff=true)   ## important: myf needs to be registered
           @JuMP.variable(m, x[1:n] >= 0.5)
           JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))
           JuMP.solve(m)
           return [getvalue(x[i]) for i=1:n]
       end##testme
testme (generic function with 1 method)

julia> testme(3)
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.12.8, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        3
                     variables with only lower bounds:        3
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  7.8029997e-01 0.00e+00 2.00e-02   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  7.5030000e-01 0.00e+00 9.70e-03  -8.0 1.01e-02    -  1.00e+00 9.80e-01f  1
   2  7.5000291e-01 0.00e+00 1.11e-15 -10.0 9.90e-05    -  1.00e+00 1.00e+00f  1
   3  7.4999997e-01 0.00e+00 1.11e-16 -11.0 9.80e-07    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 3

                                   (scaled)                 (unscaled)
Objective...............:   7.4999997003576602e-01    7.4999997003576602e-01
Dual infeasibility......:   1.1102230246251565e-16    1.1102230246251565e-16
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   1.1921907667178425e-11    1.1921907667178425e-11
Overall NLP error.......:   1.1921907667178425e-11    1.1921907667178425e-11

Number of objective function evaluations             = 4
Number of objective gradient evaluations             = 4
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 0
Total CPU secs in IPOPT (w/o function evaluations)   =      0.101
Total CPU secs in NLP function evaluations           =      0.023

EXIT: Optimal Solution Found.
3-element Array{Float64,1}:
 0.5
 0.5
 0.5
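The registered-function pattern above minimizes a plain sum of squares with lower bounds of 0.5, so the optimum is 0.5 in every coordinate. A hedged cross-check of the same small problem with SciPy's L-BFGS-B (an assumption; any bound-constrained minimizer works):

```python
import numpy as np
from scipy.optimize import minimize

n = 3
myf = lambda a: float(np.sum(np.asarray(a)**2))   # sum of squares, as in testme
res = minimize(myf, x0=np.ones(n), bounds=[(0.5, None)] * n, method="L-BFGS-B")
print(res.x)   # each coordinate is driven to its lower bound 0.5
```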

The following problem feeds a kinky unconstrained function to JuMP. JuMP returns an incorrect answer.

- snippet.julianoeval
using JuMP, Ipopt

nearneghyperbola(a...) = prod( -1.0 ./ (abs.(collect(a) .+ 3) .+ 1e-2) )

function learn_nl_jump(userfun, n)
    mymdl = JuMP.Model(solver=Ipopt.IpoptSolver())
    JuMP.register(mymdl, :userfun, n, userfun, autodiff=true)
    @JuMP.variable(mymdl, -100 <= x[1:n] <= 100)
    JuMP.setNLobjective(mymdl, :Min, Expr(:call, :userfun, [x[i] for i=1:n]...))
    JuMP.solve(mymdl)
    return [getvalue(x[i]) for i=1:n]   ## why does getvalue not refer to mymdl??
end

xopt = learn_nl_jump(nearneghyperbola, 3)
println("x^* =: ", xopt, "\n")
println("minimum f(x^*)= ", nearneghyperbola(xopt...))
println("f(neg 3s)= ", nearneghyperbola(-3, -3, -3))

The obvious solution is `(-3, -3, -3)` for n=3. It would have been acceptable for JuMP to report failure; it is more worrisome that JuMP failed to recognize that it had gone off the deep end:

- snippet.text
EXIT: Solved To Acceptable Level.
x^* =: [-99.9897, -99.9897, 94.0]

minimum f(x^*)= -1.0955764466765225e-6
f(neg 3s)= -1.0e6
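The failure is easy to quantify by evaluating the objective at both points. A small Python reimplementation of `nearneghyperbola` (a sketch; NumPy assumed):

```python
import numpy as np

def nearneghyperbola(*a):
    # product of near-hyperbolas, each most negative at a_i = -3
    return float(np.prod(-1.0 / (np.abs(np.asarray(a, dtype=float) + 3) + 1e-2)))

true_min  = nearneghyperbola(-3, -3, -3)                # about -1.0e6
jump_best = nearneghyperbola(-99.9897, -99.9897, 94.0)  # about -1.1e-6
print(true_min, jump_best)  # JuMP's answer is worse by roughly 12 orders of magnitude
```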

- See also IntervalOptimisation.jl.
- See also Optim.jl.
- See also NLopt.jl.

jump.txt · Last modified: 2018/11/22 20:48 (external edit)