optim: General-purpose Optimization

optim {stats}    R Documentation

Description

General-purpose optimization based on Nelder–Mead, quasi-Newton and conjugate-gradient algorithms. It includes an option for box-constrained optimization and simulated annealing.

Usage

optim(par, fn, gr = NULL, ...,
      method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"),
      lower = -Inf, upper = Inf,
      control = list(), hessian = FALSE)

optimHess(par, fn, gr = NULL, ..., control = list())

Arguments

par

Initial values for the parameters to be optimized over.

fn

A function to be minimized (or maximized), with first argument the vector of parameters over which minimization is to take place. It should return a scalar result.

gr

A function to return the gradient for the "BFGS", "CG" and "L-BFGS-B" methods. If it is NULL, a finite-difference approximation will be used.

For the "SANN" method it specifies a function to generate a newcandidate point. If it is NULL a default Gaussian Markovkernel is used.

...

Further arguments to be passed to fn and gr.

method

The method to be used. See ‘Details’. Can be abbreviated.

lower, upper

Bounds on the variables for the "L-BFGS-B" method, or bounds in which to search for method "Brent".

control

a list of control parameters. See ‘Details’.

hessian

Logical. Should a numerically differentiated Hessian matrix be returned?

Details

Note that arguments after ... must be matched exactly.

By default optim performs minimization, but it will maximize if control$fnscale is negative. optimHess is an auxiliary function to compute the Hessian at a later stage if hessian = TRUE was forgotten.
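For illustration, a minimal sketch (objective and starting values invented here) of maximizing via a negative fnscale, with the Hessian recovered afterwards by optimHess:

f <- function(p) -(p[1] - 1)^2 - (p[2] + 2)^2   # concave; maximum at c(1, -2)
res <- optim(c(0, 0), f, control = list(fnscale = -1))
res$par                 # approximately c(1, -2)
optimHess(res$par, f)   # Hessian at the solution, computed after the fact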

The default method is an implementation of that of Nelder and Mead (1965), that uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions.
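For example (an invented objective, not from this page), the default method can make progress on a non-differentiable objective such as a sum of absolute deviations:

f <- function(p) sum(abs(p - c(1, 2, 3)))   # kinked at the minimum
optim(c(0, 0, 0), f)$par                    # roughly c(1, 2, 3)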

Method "BFGS" is a quasi-Newton method (also known as a variablemetric algorithm), specifically that published simultaneously in 1970by Broyden, Fletcher, Goldfarb and Shanno. This uses function valuesand gradients to build up a picture of the surface to be optimized.

Method "CG" is a conjugate gradients method based on that byFletcher and Reeves (1964) (but with the option of Polak–Ribiere orBeale–Sorenson updates). Conjugate gradient methods will generallybe more fragile than the BFGS method, but as they do not store amatrix they may be successful in much larger optimization problems.

Method "L-BFGS-B" is that of Byrd et. al. (1995) whichallows box constraints, that is each variable can be given a lowerand/or upper bound. The initial value must satisfy the constraints.This uses a limited-memory modification of the BFGS quasi-Newtonmethod. If non-trivial bounds are supplied, this method will beselected, with a warning.

Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.

Method "SANN" is by default a variant of simulated annealinggiven in Belisle (1992). Simulated-annealing belongs to the class ofstochastic global optimization methods. It uses only function valuesbut is relatively slow. It will also work for non-differentiablefunctions. This implementation uses the Metropolis function for theacceptance probability. By default the next candidate point isgenerated from a Gaussian Markov kernel with scale proportional to theactual temperature. If a function to generate a new candidate point isgiven, method "SANN" can also be used to solve combinatorialoptimization problems. Temperatures are decreased according to thelogarithmic cooling schedule as given in Belisle (1992, p. 890);specifically, the temperature is set totemp / log(((t-1) %/% tmax)*tmax + exp(1)), where t isthe current iteration step and temp and tmax arespecifiable via control, see below. Note that the"SANN" method depends critically on the settings of the controlparameters. It is not a general-purpose method but can be very usefulin getting to a good value on a very rough surface.

Method "Brent" is for one-dimensional problems only, usingoptimize(). It can be useful in cases whereoptim() is used inside other functions where only methodcan be specified, such as in mle from package stats4.

Function fn can return NA or Inf if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of fn. (Except for method "L-BFGS-B" where the values should always be finite.)
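For instance (an invented objective), a function that is Inf off its domain but finite at the starting value:

f <- function(p) if (any(p <= 0)) Inf else sum(p - log(p))  # minimum at c(1, 1)
optim(c(2, 2), f)$par   # Nelder-Mead tolerates the Inf values off the domain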

optim can be used recursively, and for a single parameter as well as many. It also accepts a zero-length par, and just evaluates the function with that argument.

The control argument is a list that can supply any of the following components (a combined usage sketch follows the list):

trace

Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. (To understand exactly what these do see the source code: higher levels give more detail.)

fnscale

An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.

parscale

A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value. Not used (nor needed) for method = "Brent".

ndeps

A vector of step sizes for the finite-difference approximation to the gradient, on par/parscale scale. Defaults to 1e-3.

maxit

The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead".

For "SANN" maxit gives the total number of functionevaluations: there is no other stopping criterion. Defaults to10000.

abstol

The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.

reltol

Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.

alpha, beta, gamma

Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).

REPORT

The frequency of reports for the "BFGS", "L-BFGS-B" and "SANN" methods if control$trace is positive. Defaults to every 10 iterations for "BFGS" and "L-BFGS-B", or every 100 temperatures for "SANN".

warn.1d.NelderMead

a logical indicating if the (default) "Nelder-Mead" method should signal a warning when used for one-dimensional minimization. As the warning is sometimes inappropriate, you can suppress it by setting this option to false.

type

for the conjugate-gradients method. Takes value 1 for the Fletcher–Reeves update, 2 for Polak–Ribiere and 3 for Beale–Sorenson.

lmm

is an integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 5.

factr

controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.

pgtol

helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.

temp

controls the "SANN" method. It is thestarting temperature for the cooling schedule. Defaults to10.

tmax

is the number of function evaluations at each temperature for the "SANN" method. Defaults to 10.
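A combined sketch of several of the components above (all values chosen purely for illustration):

fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2   # Rosenbrock
optim(c(-1.2, 1), fr, method = "BFGS",
      control = list(trace = 1,      # print progress
                     REPORT = 5,     # report every 5 iterations
                     maxit = 200,    # iteration cap
                     reltol = 1e-10))# tighter relative tolerance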

Any names given to par will be copied to the vectors passed to fn and gr. Note that no other attributes of par are copied over.
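For example, names set on par are visible inside fn:

f <- function(p) (p["a"] - 1)^2 + (p["b"] + 2)^2
optim(c(a = 0, b = 0), f)$par   # the names "a" and "b" are preserved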

The parameter vector passed to fn has special semantics and may be shared between calls: the function should not change or copy it.

Value

For optim, a list with components:

par

The best set of parameters found.

value

The value of fn corresponding to par.

counts

A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.

convergence

An integer code. 0 indicates successful completion (which is always the case for "SANN" and "Brent"). Possible error codes are

1

indicates that the iteration limit maxit had been reached.

10

indicates degeneracy of the Nelder–Mead simplex.

51

indicates a warning from the "L-BFGS-B" method; see component message for further details.

52

indicates an error from the "L-BFGS-B" method; see component message for further details.

message

A character string giving any additional information returned by the optimizer, or NULL.

hessian

Only if argument hessian is true. A symmetric matrix giving an estimate of the Hessian at the solution found. Note that this is the Hessian of the unconstrained problem even if the box constraints are active.

For optimHess, the description of the hessian component applies.

Note

optim will work with one-dimensional pars, but the default method does not work well (and will warn). Method "Brent" uses optimize and needs bounds to be available; "BFGS" often works well enough if not.

Source

The code for methods "Nelder-Mead", "BFGS" and "CG" was based originally on Pascal code in Nash (1990) that was translated by p2c and then hand-optimized. Dr Nash has agreed that the code can be made freely available.

The code for method "L-BFGS-B" is based on Fortran code by Zhu,Byrd, Lu-Chen and Nocedal obtained from Netlib (file‘opt/lbfgs_bcm.shar’: another version is in ‘toms/778’).

The code for method "SANN" was contributed by A. Trapletti.

References

Belisle, C. J. P. (1992). Convergence theorems for a class of simulated annealing algorithms on R^d. Journal of Applied Probability, 29, 885–895. doi:10.2307/3214721.

Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16, 1190–1208. doi:10.1137/0916069.

Fletcher, R. and Reeves, C. M. (1964). Function minimization by conjugate gradients. Computer Journal, 7, 148–154. doi:10.1093/comjnl/7.2.149.

Nash, J. C. (1990). Compact Numerical Methods for Computers. Linear Algebra and Function Minimisation. Adam Hilger.

Nelder, J. A. and Mead, R. (1965). A simplex algorithm for function minimization. Computer Journal, 7, 308–313. doi:10.1093/comjnl/7.4.308.

Nocedal, J. and Wright, S. J. (1999). Numerical Optimization. Springer.

See Also

nlm, nlminb.

optimize for one-dimensional minimization and constrOptim for constrained optimization.

Examples

require(graphics)

fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) { ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 *      (x2 - x1 * x1))
}
optim(c(-1.2,1), fr)
(res <- optim(c(-1.2,1), fr, grr, method = "BFGS"))
optimHess(res$par, fr, grr)
optim(c(-1.2,1), fr, NULL, method = "BFGS", hessian = TRUE)
## These do not converge in the default number of steps
optim(c(-1.2,1), fr, grr, method = "CG")
optim(c(-1.2,1), fr, grr, method = "CG", control = list(type = 2))
optim(c(-1.2,1), fr, grr, method = "L-BFGS-B")

flb <- function(x)
    { p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) }
## 25-dimensional box constrained
optim(rep(3, 25), flb, NULL, method = "L-BFGS-B",
      lower = rep(2, 25), upper = rep(4, 25)) # par[24] is *not* at boundary

## "wild" function, global minimum at about -15.81515
fw <- function(x)
    10*sin(0.3*x)*sin(1.3*x^2) + 0.00001*x^4 + 0.2*x + 80
plot(fw, -50, 50, n = 1000, main = "optim() minimising 'wild function'")

res <- optim(50, fw, method = "SANN",
             control = list(maxit = 20000, temp = 20, parscale = 20))
res
## Now improve locally {typically only by a small bit}:
(r2 <- optim(res$par, fw, method = "BFGS"))
points(r2$par, r2$value, pch = 8, col = "red", cex = 2)

## Combinatorial optimization: Traveling salesman problem
library(stats) # normally loaded

eurodistmat <- as.matrix(eurodist)

distance <- function(sq) {  # Target function
    sq2 <- embed(sq, 2)
    sum(eurodistmat[cbind(sq2[,2], sq2[,1])])
}

genseq <- function(sq) {  # Generate new candidate sequence
    idx <- seq(2, NROW(eurodistmat)-1)
    changepoints <- sample(idx, size = 2, replace = FALSE)
    tmp <- sq[changepoints[1]]
    sq[changepoints[1]] <- sq[changepoints[2]]
    sq[changepoints[2]] <- tmp
    sq
}

sq <- c(1:nrow(eurodistmat), 1)  # Initial sequence: alphabetic
distance(sq)
# rotate for conventional orientation
loc <- -cmdscale(eurodist, add = TRUE)$points
x <- loc[,1]; y <- loc[,2]
s <- seq_len(nrow(eurodistmat))
tspinit <- loc[sq,]

plot(x, y, type = "n", asp = 1, xlab = "", ylab = "",
     main = "initial solution of traveling salesman problem", axes = FALSE)
arrows(tspinit[s,1], tspinit[s,2], tspinit[s+1,1], tspinit[s+1,2],
       angle = 10, col = "green")
text(x, y, labels(eurodist), cex = 0.8)

set.seed(123) # chosen to get a good soln relatively quickly
res <- optim(sq, distance, genseq, method = "SANN",
             control = list(maxit = 30000, temp = 2000, trace = TRUE,
                            REPORT = 500))
res  # Near optimum distance around 12842

tspres <- loc[res$par,]
plot(x, y, type = "n", asp = 1, xlab = "", ylab = "",
     main = "optim() 'solving' traveling salesman problem", axes = FALSE)
arrows(tspres[s,1], tspres[s,2], tspres[s+1,1], tspres[s+1,2],
       angle = 10, col = "red")
text(x, y, labels(eurodist), cex = 0.8)

## 1-D minimization: "Brent" or optimize() being preferred.. but NM may be ok and "unavoidable",
## ----------------  so we can suppress the check+warning :
system.time(rO <- optimize(function(x) (x-pi)^2, c(0, 10)))
system.time(ro <- optim(1, function(x) (x-pi)^2, control = list(warn.1d.NelderMead = FALSE)))
rO$minimum - pi # 0 (perfect), on one platform
ro$par - pi     # ~= 1.9e-4 on one platform
utils::str(ro)