sumt {maxLik}	R Documentation

Equality-constrained optimization

Description

Sequentially Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.

This implementation is mostly intended to be called from other maximization routines, such as maxNR.

Usage

sumt(fn, grad=NULL, hess=NULL,
start,
maxRoutine, constraints,
SUMTTol = sqrt(.Machine$double.eps),
SUMTPenaltyTol = sqrt(.Machine$double.eps),
SUMTQ = 10,
SUMTRho0 = NULL,
print.level = 0, SUMTMaxIter = 100, ...)

Arguments

fn

function of a (single) vector parameter. The function may have further arguments, but those are not treated as parameters.

grad

gradient function of fn. NULL if missing

hess

function returning the Hessian matrix of fn. NULL if missing

start

initial value of the parameter.

maxRoutine

maximization algorithm, such as maxNR

constraints

list of information for constrained maximization. Currently two components are supported: eqA and eqB for linear equality constraints of the form A %*% beta + B = 0. The user must ensure that the matrices A and B are conformable.
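As an illustration of the constraints argument, the following sketch encodes the (hypothetical) restriction beta[1] + beta[2] = 1 in the A %*% beta + B = 0 form described above:

```r
## Hypothetical example: constrain beta[1] + beta[2] = 1,
## i.e. A %*% beta + B = 0 with A = (1 1) and B = -1.
A <- matrix(c(1, 1), nrow = 1)   # 1 x 2 coefficient matrix
B <- -1                          # so that A %*% beta - 1 = 0
constraints <- list(eqA = A, eqB = B)
```

Note that A has one row per constraint and one column per parameter, and B is the matching constant vector.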

SUMTTol

stopping condition. If the coefficients of successive outer iterations are close enough, i.e. the maximum absolute difference over the components is smaller than SUMTTol, the algorithm stops.

Note that this does not necessarily mean that the constraints are satisfied. If the penalty function is too 'weak', SUMT may repeatedly find the same optimum; in that case a warning is issued. The user may then set SUMTTol to a lower value, e.g. zero.

SUMTPenaltyTol

stopping condition. If the barrier value (also called the penalty) t(A %*% beta + B) %*% (A %*% beta + B) is less than SUMTPenaltyTol, the algorithm stops

SUMTQ

a double greater than one, controlling the growth of rho as described in Details. Defaults to 10.

SUMTRho0

Initial value for rho. If not specified, a (possibly) suitable value is selected. See Details.

One should consider supplying SUMTRho0 in cases where the unconstrained problem does not have a maximum, or where the maximum is too far from the constrained optimum. Otherwise the automatically chosen values may be unsuitable for achieving convergence.

print.level

Integer, level of debugging information. Larger numbers print more details.

SUMTMaxIter

Maximum SUMT iterations

...

Other arguments to maxRoutine and fn.

Details

The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function f subject to constraints, one employs a non-negative function P penalizing violations of the constraints, such that P(x) is zero iff x satisfies the constraints. One iteratively minimizes f(x) + rho_k P(x), where the rho values are increased according to the rule rho_{k+1} = q rho_k for some constant q > 1, until convergence is obtained in the sense that the barrier value P(x) is close to zero. Note that there is no guarantee that the global (approximately) constrained optimum is found. Standard practice recommends using the best solution found in "sufficiently many" replications of the algorithm.
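The outer loop described above can be sketched in base R as follows. This is an illustrative maximization variant using optim() (with fnscale = -1), not the maxLik implementation; the function name sumtSketch and its defaults are chosen here for exposition only.

```r
## Minimal SUMT sketch for maximizing fn subject to A %*% beta + B = 0.
## Illustrative only: maxLik's sumt() delegates the inner step to
## maxRoutine rather than optim().
sumtSketch <- function(fn, start, A, B,
                       q = 10, rho0 = 1,
                       tol = sqrt(.Machine$double.eps),
                       maxIter = 100) {
  penalty <- function(beta) {
    v <- A %*% beta + B
    sum(v * v)                       # t(v) %*% v, the barrier value
  }
  rho <- rho0
  beta <- start
  for (k in seq_len(maxIter)) {
    ## inner, unconstrained maximization of the penalized objective
    res <- optim(beta, function(b) fn(b) - rho * penalty(b),
                 control = list(fnscale = -1))
    betaNew <- res$par
    ## stop when coefficients stabilize or the barrier is small enough
    if (max(abs(betaNew - beta)) < tol || penalty(betaNew) < tol)
      return(list(par = betaNew, outer.iterations = k))
    beta <- betaNew
    rho <- q * rho                   # rho_{k+1} = q * rho_k
  }
  list(par = beta, outer.iterations = maxIter)
}
```

Because the inner problems are solved only approximately, the loop combines both stopping rules (coefficient stability and barrier size), mirroring the SUMTTol and SUMTPenaltyTol arguments above.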

The unconstrained minimizations are carried out by any of the maximization algorithms in the maxLik package, such as maxNR. Analytic gradient and Hessian are used if provided; numeric ones otherwise.
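As the Description notes, sumt is mostly called through other maximization routines. The following hedged sketch passes the constraints list to maxNR (requires the maxLik package; the toy objective and starting values are invented for illustration):

```r
## Sketch: maximize a toy objective subject to beta[1] + beta[2] = 1
## by passing the constraints list to maxNR (maxLik package assumed).
library(maxLik)

fn <- function(beta) -(beta[1] - 2)^2 - (beta[2] - 2)^2
A <- matrix(c(1, 1), nrow = 1)
B <- -1

res <- maxNR(fn, start = c(0, 0),
             constraints = list(eqA = A, eqB = B))
## analytically, the constrained maximum is at beta = (0.5, 0.5)
coef(res)
</imports>
```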

Value

Object of class 'maxim'. In addition, a component

constraints

A list, describing the constrained optimization. Includes the following components:

  • type: type of constrained optimization

  • barrier.value: value of the penalty function at the maximum

  • code: code for the stopping condition

  • message: a short message describing the stopping condition

  • outer.iterations: number of iterations in the SUMT step

Note

It may be a lot more efficient to wrap the actual function to be optimized in an outer function that computes the full parameter vector from a smaller set of free parameters and the constraints.
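The reparameterization suggested above can be sketched as follows, again using the hypothetical constraint beta[1] + beta[2] = 1 and a toy objective; the wrapper optimizes a single free parameter and recovers the full vector internally, so no penalty iterations are needed:

```r
## Sketch of the Note's suggestion: enforce beta[1] + beta[2] = 1
## by optimizing over one free parameter theta (toy objective).
fn <- function(beta) -(beta[1] - 2)^2 - (beta[2] - 2)^2

fnWrapped <- function(theta) {
  beta <- c(theta, 1 - theta)   # constraint holds by construction
  fn(beta)
}

res <- optim(0, fnWrapped, method = "BFGS",
             control = list(fnscale = -1))
## res$par should be near 0.5, i.e. beta = (0.5, 0.5)
```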

Author(s)

Ott Toomet otoomet@ut.ee, Arne Henningsen

See Also

sumt


[Package maxLik version 1.1-0 Index]