Least Squares Estimation




Following the specification of the model and the collection of data, the next stage is to obtain "good" estimates of β0 and β1 so that the fitted simple linear regression model best describes the data from the scientific experiment.

Parameter estimation and model fitting are other terms for the same task. The least squares approach is the most commonly used estimation method; under certain assumptions it produces estimators with desirable properties. The least squares approach and its modifications (e.g., weighted least squares) will be the focus of this blog. Other estimation methods may be superior to least squares in some situations, for example when one or more of the assumptions are violated. The maximum likelihood approach, the ridge method, and the principal components method are other estimation methods that will be discussed in later posts.

The least squares principle for the simple linear regression model is to find the estimates b0 and b1 such that the sum of the squared distances between the observed responses yi and the predicted responses ŷi = β0 + β1xi reaches its minimum over all possible choices of the regression coefficients β0 and β1.
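In symbols, the least squares estimates are the minimizers of the sum of squared residuals:

$$
(b_0, b_1) = \arg\min_{\beta_0,\, \beta_1} \sum_{i=1}^{n} \left( y_i - \beta_0 - \beta_1 x_i \right)^2 .
$$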

The least squares method is used to find parameter estimates by selecting the regression line that is "closest" to all data points (xi, yi).

It is convenient to solve for b0 and b1 in closed form: setting the partial derivatives of the sum of squared residuals with respect to β0 and β1 to zero yields the normal equations, whose solution gives the formulas below.
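$$
b_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad
b_0 = \bar{y} - b_1 \bar{x},
$$

where $\bar{x}$ and $\bar{y}$ denote the sample means of the xi and yi, respectively.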

Thus, the fitted value of the simple linear regression is defined as ŷi = b0 + b1xi.
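As a minimal sketch of how these formulas can be applied, the short Python example below computes b0, b1, and the fitted values with NumPy; the data set is made up purely for illustration.

```python
import numpy as np

# Illustrative data (made up for this example)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Least squares estimates for the simple linear regression y = b0 + b1*x
x_bar, y_bar = x.mean(), y.mean()
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b0 = y_bar - b1 * x_bar

# Fitted values and residual sum of squares
y_hat = b0 + b1 * x
rss = np.sum((y - y_hat) ** 2)

print(f"b0 = {b0:.4f}, b1 = {b1:.4f}, RSS = {rss:.4f}")
```

The same estimates can be cross-checked with np.polyfit(x, y, 1), which fits a degree-one polynomial by least squares.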


Your comments and suggestions on this topic are highly appreciated; type them in the comment section below. You can follow this blog to receive notifications of new posts.



