
What Are Linear Regression Models?

Regression models describe the relationship between a dependent variable, y, and independent variable or variables, X. The dependent variable is also called the response variable. Independent variables are also called explanatory or predictor variables. Continuous predictor variables might be called covariates, whereas categorical predictor variables might also be referred to as factors. The matrix, X, of observations on predictor variables is usually called the design matrix.
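
The design matrix is simply the n-by-p array of observed predictor values. A sketch of its layout is shown below; this is standard notation only, not anything specific to a particular fitting function.

\[
X =
\begin{bmatrix}
X_{11} & X_{12} & \cdots & X_{1p}\\
X_{21} & X_{22} & \cdots & X_{2p}\\
\vdots & \vdots & \ddots & \vdots\\
X_{n1} & X_{n2} & \cdots & X_{np}
\end{bmatrix}
\]

Row i holds the ith observation and column j holds the jth predictor variable.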

A multiple linear regression model is

\[ y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i, \qquad i = 1, \ldots, n, \]

where

  • yi is the ith response.

  • βk is the kth coefficient, where β0 is the constant term in the model. Sometimes a design matrix includes a column for the constant term, but fitlm and stepwiselm include a constant term in the model by default, so you must not enter a column of 1s into your design matrix X (see the sketch after this list).

  • Xij is the ith observation on the jth predictor variable, j = 1, ..., p.

  • εi is the ith noise term, that is, random error.
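
As a minimal sketch (the variable names and simulated data below are made up for illustration, not taken from this page), fitlm accepts a numeric design matrix without a column of 1s and reports the estimated intercept along with the other coefficients:

    % Hypothetical example: simulate n observations on p = 2 predictors
    rng(1);                          % for reproducibility
    n = 100;
    X = randn(n,2);                  % design matrix, no column of ones
    y = 3 + 2*X(:,1) - 1.5*X(:,2) + 0.5*randn(n,1);

    mdl = fitlm(X,y);                % constant term is added automatically
    disp(mdl.Coefficients)           % (Intercept), x1, x2 estimates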

In general, a linear regression model can be a model of the form

\[ y_i = \beta_0 + \sum_{k=1}^{K} \beta_k f_k\bigl(X_{i1}, X_{i2}, \ldots, X_{ip}\bigr) + \varepsilon_i, \qquad i = 1, \ldots, n, \]

where f(·) is a scalar-valued function of the independent variables, Xij. The functions f(X) can take any form, including nonlinear functions or polynomials. The linearity in a linear regression model refers to linearity in the coefficients βk. That is, the response variable, y, is a linear function of the coefficients, βk.

Some examples of linear models are:

\[ y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} + \varepsilon_i \]

\[ y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i1}^3 + \beta_4 X_{i2}^2 + \varepsilon_i \]

The following, however, are not linear models, since they are not linear in the unknown coefficients, βk:

\[ y_i = \beta_0 + \beta_1 X_{i1}^{\beta_2} + \varepsilon_i \]

\[ y_i = \beta_0 + \beta_1 e^{\beta_2 X_{i1}} + \varepsilon_i \]
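
One of the linear examples above can be fit by placing transformed predictor columns in the design matrix. The sketch below is illustrative only (the simulated data and variable names are not from this page), assuming the second linear model with powers of the predictors:

    % Hypothetical example: polynomial terms are still linear in the coefficients
    rng(2);
    n  = 200;
    x1 = randn(n,1);
    x2 = randn(n,1);
    y  = 1 + 0.5*x1 - 2*x2 + 0.8*x1.^3 + 0.3*x2.^2 + 0.4*randn(n,1);

    % Put the transformed predictors into the design matrix (no column of ones)
    X   = [x1, x2, x1.^3, x2.^2];
    mdl = fitlm(X,y);
    disp(mdl.Coefficients)           % estimates of beta_0 through beta_4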

The usual assumptions for linear regression models are:

  • The noise terms, εi, are uncorrelated.

  • The noise terms, εi, have independent and identical normal distributions with mean zero and constant variance, σ². Thus

    \[ E(y_i) = E\left( \beta_0 + \sum_{k=1}^{K} \beta_k f_k\bigl(X_{i1}, X_{i2}, \ldots, X_{ip}\bigr) + \varepsilon_i \right) = \beta_0 + \sum_{k=1}^{K} \beta_k f_k\bigl(X_{i1}, X_{i2}, \ldots, X_{ip}\bigr) \]

    and

    \[ V(y_i) = V\left( \beta_0 + \sum_{k=1}^{K} \beta_k f_k\bigl(X_{i1}, X_{i2}, \ldots, X_{ip}\bigr) + \varepsilon_i \right) = V(\varepsilon_i) = \sigma^2. \]

    So the variance of yi is the same for all levels of Xij (a residual-diagnostic sketch follows this list).

  • The responses yi are uncorrelated.
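
These assumptions are commonly checked informally by examining the residuals of a fitted model. A minimal sketch, assuming a fitted LinearModel object mdl such as the ones created above, is:

    % Informal checks of the noise assumptions on a fitted model mdl
    plotResiduals(mdl)               % histogram of residuals (roughly normal?)
    plotResiduals(mdl,'fitted')      % residuals vs. fitted values (constant spread?)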

The fitted linear function is

\[ \hat{y}_i = b_0 + \sum_{k=1}^{K} b_k f_k\bigl(X_{i1}, X_{i2}, \ldots, X_{ip}\bigr), \qquad i = 1, \ldots, n, \]

where ŷi is the estimated response and the bk are the fitted coefficients. The coefficients are estimated so as to minimize the mean squared difference between the prediction vector ŷ and the true response vector y, that is, to minimize ∥y − ŷ∥². This method is called the method of least squares. Under the assumptions on the noise terms, these coefficients also maximize the likelihood of the prediction vector.
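
As an illustrative sketch (the data are simulated, not from this page), the least-squares coefficients can be computed directly with the backslash operator and compared with the estimates reported by fitlm:

    % Hypothetical example: least squares by hand vs. fitlm
    rng(3);
    n = 50;
    X = randn(n,3);
    y = 2 + X*[1; -0.5; 0.25] + 0.3*randn(n,1);

    b   = [ones(n,1), X] \ y;        % least-squares solution, intercept first
    mdl = fitlm(X,y);                % same model fit by fitlm

    disp([b, mdl.Coefficients.Estimate])   % the two columns should agree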

In a linear regression model of the form y = β1X1 + β2X2 + ... + βpXp, the coefficient βj expresses the impact of a one-unit change in the predictor variable, Xj, on the mean of the response, E(y), provided that all other variables are held constant. The sign of the coefficient gives the direction of the effect. For example, if the linear model is E(y) = 1.8 – 2.35X1 + X2, then –2.35 indicates a 2.35-unit decrease in the mean response for a one-unit increase in X1, given that X2 is held constant. If the model is E(y) = 1.1 + 1.5X1² + X2, the coefficient of X1² indicates a 1.5-unit increase in the mean of y for a one-unit increase in X1², given all else is held constant. However, in the case of E(y) = 1.1 + 2.1X1 + 1.5X1², it is difficult to interpret the coefficients in the same way, since it is not possible to hold X1 constant while X1² changes, or vice versa.
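
As a brief illustration (the data below are simulated to roughly follow the first example model above, with unit-variance noise added as an assumption), fitting such a model recovers a coefficient near –2.35, which is read as the change in the mean response per one-unit increase in X1 with X2 held constant:

    % Hypothetical example: interpreting a fitted coefficient
    rng(4);
    n  = 500;
    X1 = randn(n,1);
    X2 = randn(n,1);
    y  = 1.8 - 2.35*X1 + X2 + randn(n,1);    % E(y) = 1.8 - 2.35*X1 + X2

    mdl = fitlm([X1, X2], y);
    disp(mdl.Coefficients.Estimate)          % approximately [1.8; -2.35; 1]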

