Linear Regression 3 - Multivariate Linear Regression

Simple linear regression can only model the relationship between the target feature and a single descriptive feature, which is often not the case in real life. For example, the dataset of our toy example has now been expanded to four features, including the target feature Rental Price:

| Size | Rental Price | Floor | Number of bedrooms |
|------|--------------|-------|--------------------|
| 350  | 1840         | 15    | 1                  |
| 410  | 1682         | 3     | 2                  |
| 430  | 1555         | 7     | 1                  |
| 550  | 2609         | 4     | 2                  |

Generalizing simple linear regression to multivariate linear regression is straightforward: we just add a weighted term for each additional feature, giving the model below.

$$M_w(x) = w_0x_0 + w_1x_1+ w_2x_2+ \dots + w_nx_n=\sum^n_{j=0}w_jx_j=\mathbf{w^Tx}$$

where \(w_j\) is the weight of feature \(j\) and \(x_0\) is always equal to 1. Both \(\mathbf{w}\) and \(\mathbf{x}\) are \((n+1)\)-dimensional real-valued vectors.
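As a concrete illustration, here is a minimal sketch of this prediction in Python with NumPy. The weight values are made up for the example; the leading 1 in \(\mathbf{x}\) plays the role of \(x_0\):

```python
import numpy as np

# Illustrative (made-up) weights w = (w_0, w_1, ..., w_n); w_0 is the intercept.
w = np.array([500.0, 2.0, 10.0, 150.0])

# One example from the toy dataset: Size=350, Floor=15, Number of bedrooms=1.
# A leading 1 is prepended so that x_0 = 1 and w_0 acts as the intercept.
x = np.array([1.0, 350.0, 15.0, 1.0])

prediction = w @ x  # M_w(x) = w^T x
print(prediction)   # 1500.0
```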

Gradient Descent for Multiple Variables

The cost function requires just a slight change for the new regression model:

$$J(w)=\frac{1}{2m}\sum^m_{i=1}\Big(M_w(x^i)-y^i\Big)^2$$
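In code, this cost function is nearly a one-liner. The sketch below assumes the feature matrix `X` already has a leading column of ones for \(x_0\) (the names `cost`, `X`, and `y` are our own):

```python
import numpy as np

def cost(w, X, y):
    """J(w) = 1/(2m) * sum_i (M_w(x^i) - y^i)^2.

    X is an (m, n+1) feature matrix whose first column is all ones;
    y is the vector of m target values.
    """
    m = len(y)
    residuals = X @ w - y  # M_w(x^i) - y^i for every example i
    return (residuals @ residuals) / (2 * m)
```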

and the gradient descent algorithm now becomes:

$$w_j=w_j-\alpha \frac{1}{m}\sum^m_{i=1}\Big(M_w(x^i)-y^i\Big)x_j^i$$

for j = 0 to n, where \(x_j^i\) is the value of feature \(j\) in the \(i\)-th example. Note that all weights are updated simultaneously in each iteration.
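Putting the pieces together, here is one possible gradient descent loop on the toy dataset above. This is a sketch, not a definitive implementation: the learning rate, iteration count, and feature scaling step are our own illustrative choices.

```python
import numpy as np

# Toy dataset: Size, Floor, Number of bedrooms -> Rental Price.
X_raw = np.array([[350, 15, 1],
                  [410,  3, 2],
                  [430,  7, 1],
                  [550,  4, 2]], dtype=float)
y = np.array([1840, 1682, 1555, 2609], dtype=float)

# Scale features to comparable ranges so a single learning rate works,
# then prepend the x_0 = 1 column for the intercept w_0.
X_scaled = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)
X = np.hstack([np.ones((len(y), 1)), X_scaled])

m, n_plus_1 = X.shape
w = np.zeros(n_plus_1)  # initial weights (how to choose them is discussed next)
alpha = 0.1             # learning rate (likewise an open question for now)

for _ in range(1000):
    residuals = X @ w - y              # M_w(x^i) - y^i
    gradient = (X.T @ residuals) / m   # 1/m * sum_i residual_i * x_j^i, per j
    w = w - alpha * gradient           # simultaneous update of all w_j

print(w)  # learned weights on the scaled features
```

The matrix form `X.T @ residuals` computes all n+1 partial derivatives at once, which is exactly the simultaneous update the formula above calls for.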

How Should We Start?

At this point, we have introduced the basics of linear regression in this and the previous posts. The main idea is first to define a cost function that measures the error, and second to iteratively update the weights \(w_j\) using gradient descent until the optimum is reached or some other stopping criterion is met. You may start testing linear regression on your own dataset, and you may immediately run into two questions: how should the initial values of the weights be chosen, and what learning rate should be used? We shall find out how to start in the next article.

Leo Mak
Enthusiast of Data Mining
