mvpa2.clfs.model_selector.NLP

mvpa2.clfs.model_selector.NLP(*args, **kwargs)

NLP: constructor for general Non-Linear Problem assignment

f(x) -> min (or -> max)

subject to:

    c(x) <= 0
    h(x) = 0
    A x <= b
    Aeq x = beq
    lb <= x <= ub

Examples of valid usage:

    p = NLP(f, x0, <params as kwargs>)
    p = NLP(f=objFun, x0=myX0, <params as kwargs>)
    p = NLP(f, x0, A=A, df=objFunGradient, Aeq=Aeq, b=b, beq=beq, lb=lb, ub=ub)

See also: /examples/nlp_*.py

INPUTS:
    f: objFun
    x0: start point, vector of length n

Optional:
    name: problem name (string), used in text & graphics output
    df: user-supplied gradient of the objective function
    c, h: functions defining the nonlinear inequality (c(x) <= 0) and equality (h(x) = 0) constraints
    dc, dh: functions defining the 1st derivatives of the nonlinear constraints

    A: m1 x n matrix for the linear inequality constraints A x <= b
    Aeq: m2 x n matrix for the linear equality constraints Aeq x = beq
    b, beq: corresponding vectors of lengths m1, m2
    lb, ub: vectors of length n for the bound constraints lb <= x <= ub; may include +/- inf values
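As a sanity check on the shapes above, the constraint data can be verified with NumPy before being handed to the constructor. All names and values below are invented for illustration:

```python
import numpy as np

n = 3                                   # number of variables
A = np.array([[1.0, 2.0, 0.0],          # m1 x n = 2 x 3, for A x <= b
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 3.0])                # length m1
Aeq = np.array([[1.0, 1.0, 1.0]])       # m2 x n = 1 x 3, for Aeq x = beq
beq = np.array([1.0])                   # length m2
lb = np.full(n, -np.inf)                # bounds may include +/- inf values
ub = np.full(n, 5.0)

x = np.array([0.5, 0.25, 0.25])         # a candidate point

# verify the shape contract described above
assert A.shape == (len(b), n) and Aeq.shape == (len(beq), n)

# check feasibility of x against all three constraint groups
feasible = (np.all(A @ x <= b)
            and np.allclose(Aeq @ x, beq)
            and np.all((lb <= x) & (x <= ub)))
print(feasible)
```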

    iprint = {10}: print text output every <iprint>-th iteration
    goal = {'minimum'} | 'min' | 'maximum' | 'max': minimize or maximize the objective function
    diffInt = {1e-7}: finite-difference gradient approximation step, scalar or vector of length nVars
    scale = {None}: scale factor, see /examples/badlyScaled.py for more details
    stencil = {1} | 2 | 3: finite-difference derivative approximation stencil, used by most solvers (except scipy_cobyla) when no user-supplied derivatives for the objective function / nonlinear constraints are provided

        1: (f(x+dx) - f(x)) / dx  (fastest but least precise)
        2: (f(x+dx) - f(x-dx)) / (2*dx)  (slower but more exact)
        3: (-f(x+2*dx) + 8*f(x+dx) - 8*f(x-dx) + f(x-2*dx)) / (12*dx)  (slowest but most exact)
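The accuracy ordering of the three stencils can be checked numerically. The sketch below (exp at x = 0, where the exact derivative is 1) is illustrative only and is not OpenOpt code:

```python
from math import exp

f = exp
x, dx = 0.0, 1e-2   # exact derivative of exp at 0 is exp(0) = 1

d1 = (f(x + dx) - f(x)) / dx                                                # stencil 1
d2 = (f(x + dx) - f(x - dx)) / (2 * dx)                                     # stencil 2
d3 = (-f(x + 2*dx) + 8*f(x + dx) - 8*f(x - dx) + f(x - 2*dx)) / (12 * dx)   # stencil 3

errs = [abs(d - 1.0) for d in (d1, d2, d3)]
print(errs)   # errors shrink as the stencil order grows
```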

    check.df, check.dc, check.dh: if set to True, OpenOpt will check the user-supplied gradients
    args (or args.f, args.c, args.h): additional arguments passed to the objective function and the nonlinear constraints; see /examples/userArgs.py for more details

    contol: maximum allowed residual at the optimal point (for every constraint of the problem, constraint(x_optim) < contol is required from the solver)
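The contol test amounts to computing the largest constraint residual at a point. The sketch below is an invented illustration of that idea, not OpenOpt's internal code:

```python
def max_residual(x, c=(), h=()):
    """Largest constraint violation at x, for c_i(x) <= 0 and h_j(x) = 0."""
    res = [max(ci(x), 0.0) for ci in c]   # inequality violations (only positive parts count)
    res += [abs(hj(x)) for hj in h]       # equality violations
    return max(res, default=0.0)

contol = 1e-6
c = (lambda x: x[0] + x[1] - 1.0,)        # encodes x0 + x1 <= 1
h = (lambda x: x[0] - x[1],)              # encodes x0 = x1

x_optim = (0.5, 0.5)
print(max_residual(x_optim, c, h) < contol)   # this point satisfies both constraints
```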

stop criteria:
    maxIter {400}
    maxFunEvals {1e5}
    maxCPUTime {inf}
    maxTime {inf}
    maxLineSearch {500}
    fEnough {-inf for min problems, +inf for max problems}: stop if the objective function value is better than fEnough and all constraint residuals are less than contol

    ftol {1e-6}: used in the stop criterion || f[iter_k] - f[iter_k+1] || < ftol
    xtol {1e-6}: used in the stop criterion || x[iter_k] - x[iter_k+1] || < xtol
    gtol {1e-6}: used in the stop criterion || gradient(x[iter_k]) || < gtol
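The ftol/xtol/gtol tests above can be sketched as a generic iteration loop. Gradient descent on f(x) = x**2 is an invented stand-in here for whatever step a real solver takes:

```python
def descend(x, ftol=1e-6, xtol=1e-6, gtol=1e-6, maxIter=400):
    f = lambda x: x * x
    df = lambda x: 2 * x
    for k in range(maxIter):
        x_new = x - 0.1 * df(x)              # one solver step
        if (abs(f(x) - f(x_new)) < ftol      # || f[iter_k] - f[iter_k+1] || < ftol
                or abs(x - x_new) < xtol     # || x[iter_k] - x[iter_k+1] || < xtol
                or abs(df(x_new)) < gtol):   # || gradient(x[iter_k]) || < gtol
            return x_new, k
        x = x_new
    return x, maxIter                        # maxIter stop criterion

x_opt, iters = descend(5.0)
print(x_opt, iters)   # converges near 0 well before maxIter
```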

callback - user-defined callback function(s), see /examples/userCallback.py

Notes:
    1) For safety it is recommended to check/reassign the default values (via print p.maxIter / prob.maxIter = 400): they may change in future OpenOpt versions and/or may not be updated in this documentation in time.
    2) Some solvers may ignore some of the stop criteria above and/or use their own.
    3) For the NSP constructor the ftol, xtol, gtol defaults may have other values.

graphic options:
    plot = {False} | True: plot a figure (currently implemented for UC problems only); requires matplotlib
    color = {'blue'} | 'black' | ... (any valid matplotlib color)
    specifier = {'-'} | '--' | ':' | '-.': plot line specifier
    show = {True} | False: whether to call pylab.show() after the solver finishes
    xlim {(nan, nan)}, ylim {(nan, nan)}: initial estimates for the graphical output borders (you can use, for example, p.xlim = (nan, 10) or p.ylim = [-8, 15] or p.xlim = [inf, 15]; only real finite values are taken into account); for constrained problems ylim affects only the 1st subplot
    p.graphics.xlabel or p.xlabel = {'time'} | 'cputime' | 'iter': desired units for the graphic output x-axis, case-insensitive

Note: some Python IDEs have problems with matplotlib!

Also, after creating an NLP instance you may modify its fields in place:

    p.maxIter = 1000
    p.df = lambda x: cos(x)

OUTPUT: OpenOpt NLP class instance

Solving NLPs is performed via:

    r = p.solve(string_name_of_solver)

or via p.maximize / p.minimize.

    r.xf - the solution found (NaNs if a problem occurred)
    r.ff - objective function value at the solution (NaN if a problem occurred)

(see also other fields, such as CPUTimeElapsed, TimeElapsed, isFeasible, iter, etc., via dir(r))
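For readers without OpenOpt installed, the same solve-and-inspect workflow can be approximated with scipy.optimize.minimize, which several of the scipy_* solvers listed below wrap; res.x, res.fun, and res.success play roughly the roles of r.xf, r.ff, and r.isFeasible. The objective is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # toy objective, minimum at (1, -2)
x0 = np.zeros(2)                                      # start point, length n = 2

res = minimize(f, x0, method='BFGS')   # rough analogue of r = p.solve('scipy_bfgs')
print(res.x, res.fun, res.success)     # rough analogue of r.xf, r.ff, r.isFeasible
```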

Solvers available for now:

single-variable:
    goldenSection, scipy_fminbound (the latter is not recommended)
    (both solvers require finite lb, ub and ignore user-supplied gradients)
unconstrained:
    scipy_bfgs, scipy_cg, scipy_ncg
    scipy_powell and scipy_fmin (these two cannot use a user-supplied gradient)
    amsg2p - requires knowing fOpt (the optimal objective value)
box-bounded:
    scipy_lbfgsb, scipy_tnc - require scipy installed
    bobyqa - does not use derivatives; requires http://openopt.org/nlopt installed
    ptn, slmvm1, slmvm2 - require http://openopt.org/nlopt installed
all constraints:
    ralg
    ipopt (requires ipopt + pyipopt installed)
    scipy_slsqp
    scipy_cobyla (this one cannot handle user-supplied gradients)
    lincher (requires a CVXOPT QP solver)
    gsubg - for large-scale problems
    algencan (ver. 2.0.3 or more recent; a very powerful constrained solver, GPL; requires ALGENCAN + its Python interface installed, see http://www.ime.usp.br/~egbirgin/tango/)
    mma and auglag - require http://openopt.org/nlopt installed