# Nonlinear regression

In statistics, nonlinear regression is the problem of inference for a model

y = f(x, θ)

based on multidimensional x, y data, where f is some nonlinear function with respect to the unknown parameters θ. At a minimum, we may want to obtain the parameter values associated with the best-fitting curve (usually, in the least-squares sense). (See the article on curve fitting.) Statistical inference may also be needed, such as confidence intervals for the parameters, or a test of whether the fitted model agrees well with the data.

The scope of nonlinear regression is clarified by considering the case of polynomial regression, which is actually best not treated as a case of nonlinear regression. When f takes a form such as

f(x) = a x² + b x + c

our function is nonlinear as a function of x, but it is linear as a function of the unknown parameters a, b, and c. The latter is the sense of "linear" in the context of statistical regression modeling. The appropriate computational procedures for polynomial regression are procedures of (multiple) linear regression with two predictor variables, x and x² say. However, on occasion it is suggested that nonlinear regression is needed for fitting polynomials. Practical consequences of this misunderstanding include that a nonlinear optimization procedure may be used when the solution is actually available in closed form. Also, capabilities for linear regression are likely to be more comprehensive in some software than capabilities related to nonlinear regression.
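As a minimal sketch of the closed-form solution mentioned above, the quadratic can be fitted by ordinary linear least squares on the predictor columns x and x² (the data here are hypothetical, chosen to follow the model exactly):

```python
import numpy as np

# Hypothetical data generated from y = 2x^2 - 3x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x**2 - 3.0 * x + 1.0

# Design matrix with an intercept column and predictors x and x^2.
# The model is linear in (c, b, a), so ordinary least squares
# recovers the coefficients in closed form -- no iteration needed.
X = np.column_stack([np.ones_like(x), x, x**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # ≈ [1., -3., 2.], i.e. c, b, a
```

No nonlinear optimizer or starting values are involved; this is exactly the multiple linear regression the paragraph above recommends.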

## General

### Linearization

Some nonlinear regression problems can be linearized by a suitable transformation of the model formulation.

For example, consider the nonlinear regression problem (ignoring the error term):

y = a e^{b x}

If we take the logarithm of both sides, it becomes

ln(y) = ln(a) + b x,

suggesting estimation of the unknown parameters by a linear regression of ln(y) on x, a computation that does not require iterative optimization. However, linearization should be used with caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results; these may not be desired effects. On the other hand, depending on the largest source of error, linearization may make the errors approximately normally distributed, so the choice to linearize must be informed by modeling considerations.
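A short sketch of this linearization, using hypothetical noise-free data generated from the exponential model, so the straight-line fit recovers the parameters exactly:

```python
import numpy as np

# Hypothetical data generated from y = a * exp(b * x) with a=2, b=0.5
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 2.0 * np.exp(0.5 * x)

# Linearize: ln(y) = ln(a) + b*x, then fit a straight line by OLS.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat = np.exp(beta[0]), beta[1]
print(a_hat, b_hat)  # ≈ 2.0, 0.5
```

With real (noisy) data the caveats above apply: fitting ln(y) implicitly assumes multiplicative errors, and the transformed fit will weight points differently from a direct nonlinear fit.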

"Linearization" as used here is not to be confused with the local linearization involved in standard algorithms such as the Gauss-Newton algorithm. Similarly, the methodology of generalized linear models does not involve linearization for parameter estimation.

### Ordinary and weighted least squares

The best-fit curve is often assumed to be that which minimizes the sum of squared residuals, SSR say. This is the (ordinary) least squares (OLS) approach. However, in cases where the observations have different error variances, a weighted sum of squared residuals may be minimized, SSWR say: the weighted least squares (WLS) criterion. In practice, the variance may depend on the fitted mean, so the weights may be recomputed on each iteration, in an iteratively reweighted least squares algorithm.

In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression; numerical optimization algorithms are applied instead. Again in contrast to linear regression, there may be many local minima of the function to be minimized. In practice, guess values (starting values) of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum.
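The iteration described above can be illustrated with a minimal Gauss-Newton sketch for the exponential model y = a·e^{bx}. The data and starting values here are hypothetical; with poor starting values this undamped iteration can diverge, which is why guess values matter:

```python
import numpy as np

def gauss_newton_exp(x, y, a0, b0, n_iter=20):
    """Minimal, undamped Gauss-Newton sketch for fitting y = a*exp(b*x).
    a0, b0 are the guess (starting) values the text refers to."""
    a, b = a0, b0
    for _ in range(n_iter):
        pred = a * np.exp(b * x)
        r = y - pred                                   # residuals
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        # Linear least-squares step on the locally linearized problem
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        a, b = a + step[0], b + step[1]
    return a, b

# Hypothetical noise-free data from a = 2, b = 0.5
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(0.5 * x)
a_hat, b_hat = gauss_newton_exp(x, y, a0=1.5, b0=0.4)
```

Production implementations add safeguards such as step damping (as in the Levenberg-Marquardt algorithm) and convergence tests rather than a fixed iteration count.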

### Monte Carlo estimation of errors

If the error of each data point is known, then the reliability of the parameters can be estimated by Monte Carlo simulation. Each data point is randomized according to its mean and standard deviation, the curve is fitted and parameters recorded. The data points are then randomized again and new parameters determined. In time, many sets of parameters will be generated and their mean and standard deviation can be calculated.[1][2]
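The resampling loop described above can be sketched as follows. For brevity this uses a straight-line model with a closed-form fit; the same loop applies unchanged with a nonlinear fitter. All data and error values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: model y = m*x + c with known per-point
# standard deviations sigma (here equal, but they need not be).
x = np.linspace(0.0, 10.0, 15)
y_obs = 3.0 * x + 1.0
sigma = np.full_like(x, 0.5)

def fit_line(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # (intercept c, slope m)

# Monte Carlo: perturb each point by its own error, refit, record.
params = np.array([
    fit_line(x, y_obs + rng.normal(0.0, sigma)) for _ in range(1000)
])
param_mean = params.mean(axis=0)  # close to the fitted (c, m)
param_std = params.std(axis=0)    # the spread estimates the parameter errors
```

The standard deviation of each recorded parameter across the simulated fits is the Monte Carlo estimate of that parameter's uncertainty.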

## References

• G. A. F. Seber and C. J. Wild. Nonlinear Regression. New York: John Wiley and Sons, 1989.
• R. M. Bethea, B. S. Duran and T. L. Boullion. Statistical Methods for Engineers and Scientists. New York: Marcel Dekker, Inc., 1985. ISBN 0-8247-7227-X.
1. ^ Motulsky, H. J. & Ransnas, L. A. (1987). Fitting curves to data using nonlinear regression. FASEB J 1:365–374.
2. ^ McIntosh, J. E. A. & McIntosh, R. P. (1980). Mathematical Modelling and Computers in Endocrinology. p. 71. Springer-Verlag, Berlin, Germany.

## External links

• NLINLS, nonlinear least squares by the differential evolution method of global optimization: a Fortran program
• ISAT, nonlinear regression with explicit error control
• Zunzun.com, online curve and surface fitting
• NLREG, a proprietary program
• MATLAB statistics
• simplemax.net, online optimization service
