sumt {maxLik} | R Documentation |
Sequentially Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.
This implementation is mostly intended to be called from other
maximization routines, such as maxNR.
sumt(fn, grad = NULL, hess = NULL,
     start,
     maxRoutine, constraints,
     SUMTTol = sqrt(.Machine$double.eps),
     SUMTPenaltyTol = sqrt(.Machine$double.eps),
     SUMTQ = 10, SUMTRho0 = NULL,
     print.level = 0, SUMTMaxIter = 100,
     ...)
fn |
function of a (single) vector parameter. The function may have further arguments, but those are not treated as parameters. |
grad |
gradient function of fn. If NULL, a numeric gradient is used. |
hess |
Hessian matrix function of fn. If NULL, a numeric Hessian is used. |
start |
initial value of the parameter. |
maxRoutine |
maximization algorithm, such as maxNR. |
constraints |
list, information for constrained maximization.
Currently two components are supported: eqA and eqB, specifying the
linear equality constraint A %*% beta + B = 0. |
SUMTTol |
stopping condition. If the coefficients of successive outer iterations are close enough, i.e. the maximum absolute difference over the components is smaller than SUMTTol, the algorithm stops. Note that this does not necessarily mean that the constraints are satisfied. If the penalty function is too 'weak', SUMT may repeatedly find the same optimum; in that case a warning is issued. The user may then try setting SUMTTol to a lower value, e.g. to zero. |
SUMTPenaltyTol |
stopping condition. If the barrier value (also called penalty)
t(A %*% beta + B) %*% (A %*% beta + B)
is smaller than SUMTPenaltyTol, the algorithm stops. |
SUMTQ |
a double greater than one, controlling the growth of rho between the outer iterations: rho_{k+1} = SUMTQ * rho_k. |
SUMTRho0 |
initial value of rho. If NULL, a suitable value is chosen automatically. One should consider supplying SUMTRho0 if the unconstrained problem does not have a maximum, or if the unconstrained maximum is too far from the constrained one. |
print.level |
Integer, debugging information. Larger numbers print more details. |
SUMTMaxIter |
maximum number of (outer) SUMT iterations. |
... |
other arguments to maxRoutine and fn. |
The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function f subject to constraints, one employs a non-negative function P penalizing violations of the constraints, such that P(x) is zero iff x satisfies the constraints. One iteratively minimizes f(x) + rho_k P(x), where the rho values are increased according to the rule rho_{k+1} = q rho_k for some constant q > 1, until convergence is obtained in the sense that the barrier value P(x)'P(x) is close to zero. Note that there is no guarantee that a globally (approximately) constrained optimum is found. Standard practice recommends using the best solution found in "sufficiently many" replications of the algorithm.
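The iteration above can be sketched in a few lines of base R. This is an illustrative, self-contained toy, not package code: the objective, constraint, starting value, tolerances, and the use of stats::optim() for the unconstrained steps are all assumptions (in the package, the unconstrained steps are performed by maxRoutine).

```r
## Maximize f(x) = -(x1^2 + x2^2) subject to x1 + x2 = 1,
## written as A %*% x + B = 0 with A = (1 1), B = -1.
f <- function(x) -sum(x^2)
A <- matrix(c(1, 1), nrow = 1)
B <- -1
P <- function(x) {                       # penalty: zero iff x is feasible
  v <- A %*% x + B
  sum(v * v)
}

x <- c(2, -3)                            # infeasible starting value
rho <- 1                                 # rho_0
q <- 10                                  # growth factor, cf. SUMTQ
for (k in 1:25) {
  ## unconstrained step: maximize f(x) - rho * P(x)
  res <- optim(x, function(z) f(z) - rho * P(z),
               control = list(fnscale = -1))
  x <- res$par
  if (P(x) < 1e-10) break                # penalty small: constraint (nearly) holds
  rho <- q * rho                         # rho_{k+1} = q * rho_k
}
round(x, 3)                              # near the constrained optimum (0.5, 0.5)
```

As rho grows, the unconstrained optima are pulled toward the feasible set; here they trace x1 = x2 = rho/(1 + 2*rho), approaching (0.5, 0.5).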
The unconstrained maximizations are carried out by any of
the maximization algorithms in the maxLik package, such as
maxNR.
Analytic gradient and Hessian are used if
provided, numeric ones otherwise.
Object of class 'maxim'. In addition, a component
constraints |
A list, describing the constrained optimization, including the barrier (penalty) value at the optimum and the number of outer (SUMT) iterations performed.
|
It may be considerably more efficient to wrap the actual function to be optimized in an outer function that computes the full parameter vector from a smaller set of free parameters and the constraints.
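As a toy illustration of this reparameterization (the objective and constraint here are assumptions, not part of the package interface): to maximize a function of (x1, x2) subject to x1 + x2 = 1, one can optimize over the single free parameter x1 and reconstruct x2 inside a wrapper, so the constraint holds exactly and no penalty iterations are needed.

```r
f <- function(x) -sum((x - c(2, 3))^2)        # objective in the full parameters
fFree <- function(x1) f(c(x1, 1 - x1))        # enforce x2 = 1 - x1 exactly
res <- optimize(fFree, interval = c(-10, 10), maximum = TRUE)
c(res$maximum, 1 - res$maximum)               # constrained optimum, here (0, 1)
```

This turns a two-parameter constrained problem into a one-parameter unconstrained one; the same idea applies to any linear equality constraint by solving for some parameters in terms of the others.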
Ott Toomet otoomet@ut.ee, Arne Henningsen