2. Locally Weighted Linear Regression


This locally weighted linear regression model is a non-parametric learning algorithm: the amount of data we must keep around to compute hθ(x) grows linearly with the size of our training set m. Thus memory requirements grow with the training set.

An algorithm that makes it easy to fit curved (non-linear) data

  1. Look at the data at a small point that you’re interested in
  2. Build a local hypothesis just for that section and try to predict that area
  3. Given a query point x where we want to make a prediction, fit θ to minimize
    Σ_i w^(i)(y^(i) − θ^{T}x^(i))^2 , where w^(i) = exp(−(x^(i) − x)^2 / (2τ^2))
  4. The weights depend on the particular point x at which we’re trying to evaluate h(x)
    if |x(i) − x| is small, then w(i) is close to 1
    if |x(i) − x| is large, then w(i) is small (close to 0)
  5. So how do we determine the appropriate values of θ?
    We pick the θ that minimizes the weighted sum of squared errors above, which gives the highest weight to the training examples closest to the query point x
  6. Bandwidth Parameter τ: The weight function w^(i) = exp(−(x^(i) − x)^2 / (2τ^2)) is selected because we want a bell-shaped curve that peaks close to x and then falls off quickly after

     τ controls the shape of the bell (fat vs thin), i.e. how quickly a training example’s weight falls off with its distance from x

Regular Normal Equation: θ = (X^{T}X)^{-1}(X^{T}y)
Weighted Normal Equation: θ = (X^{T}WX)^{-1}(X^{T}Wy), where W is the diagonal matrix of the weights w^(i)
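The weighted fit can be sketched in NumPy. This is an illustrative implementation, not the course’s code; the function name lwr_predict and the toy sine-curve data are assumptions for the demo:

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.5):
    """Predict y at x_query with locally weighted linear regression.

    X : (m, n) design matrix whose first column is all ones (intercept)
    y : (m,) target values
    tau : bandwidth of the Gaussian weights
    """
    # Gaussian weights: w_i = exp(-||x_i - x_query||^2 / (2 tau^2))
    diffs = X - x_query
    w = np.exp(-np.sum(diffs**2, axis=1) / (2 * tau**2))
    W = np.diag(w)
    # Weighted normal equation: theta = (X^T W X)^{-1} X^T W y
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return x_query @ theta

# Toy data: a sine curve that no single global line can fit
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 6, 100))
y = np.sin(x) + 0.1 * rng.normal(size=100)
X = np.column_stack([np.ones_like(x), x])

# Predict near x = pi/2, where sin(x) is close to 1
pred = lwr_predict(np.array([1.0, np.pi / 2]), X, y, tau=0.3)
```

Each query point gets its own θ fit at prediction time, which is why the whole training set must be kept in memory.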

Probabilistic interpretation of data

Assume y^(i) = θ^{T}x^(i) + ε^(i), where ε^(i) is an error term which captures unmodeled effects or random noise.

Assuming ε^(i) ~ N(0, σ^2), the density of ε^(i) is given by

p(ε^(i)) = (1/(√(2π)σ)) · exp(−(ε^(i))^2 / (2σ^2))

This implies that the distribution of y^(i) given x^(i) is

p(y^(i) | x^(i); θ) = (1/(√(2π)σ)) · exp(−(y^(i) − θ^{T}x^(i))^2 / (2σ^2)), i.e. y^(i) | x^(i); θ ~ N(θ^{T}x^(i), σ^2)

Likelihood function

Given the design matrix X, which contains all the inputs x^(i) in its rows, the likelihood of the parameters is

L(θ) = p(⃗y | X; θ) = ∏_{i=1}^{m} p(y^(i) | x^(i); θ)

Maximum likelihood estimation

We should choose θ so as to make the data as probable as possible, i.e. maximize L(θ).

Instead of maximizing L(θ) directly, we can maximize the log likelihood ℓ(θ):

ℓ(θ) = log L(θ) = m log(1/(√(2π)σ)) − (1/σ^2) · (1/2) Σ_{i=1}^{m} (y^(i) − θ^{T}x^(i))^2

Maximizing ℓ(θ) is the same as minimizing (1/2) Σ_{i=1}^{m} (y^(i) − θ^{T}x^(i))^2, which is the cost function J(θ). So under the Gaussian noise assumption, least squares is maximum likelihood estimation.
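A quick numeric sanity check of this equivalence (a sketch on synthetic Gaussian data; all names here are made up for the demo): the negative log likelihood and J(θ)/σ^2 differ only by a constant that does not depend on θ, so they share the same minimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, sigma = 50, 3, 0.5
X = rng.normal(size=(m, n))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + sigma * rng.normal(size=m)

def J(theta):
    # Least-squares cost: (1/2) * sum_i (theta^T x_i - y_i)^2
    r = X @ theta - y
    return 0.5 * r @ r

def neg_log_likelihood(theta):
    # -log L(theta) under y_i ~ N(theta^T x_i, sigma^2)
    r = X @ theta - y
    return m * np.log(np.sqrt(2 * np.pi) * sigma) + (r @ r) / (2 * sigma**2)

# Evaluate both objectives at two arbitrary thetas: the gap
# neg_log_likelihood - J/sigma^2 should be the same constant for both
t1, t2 = rng.normal(size=n), rng.normal(size=n)
d1 = neg_log_likelihood(t1) - J(t1) / sigma**2
d2 = neg_log_likelihood(t2) - J(t2) / sigma**2
```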


1. Least Mean Squares (LMS) algorithm

Standard Linear regression

Goal:  Given a training set, to learn a function h : X → Y so that h(x) is a “good” predictor for the corresponding value of y. The function h is called the hypothesis.

Training example  (x^(i), y^(i)),  where
x(i) = input variable (our data)
y(i) = target variable (what we’re trying to predict)

Training set is a list of m training examples {(x^(i), y^(i)); i = 1,…,m}

X = Y = R, where X denotes the space of input values, and Y the space of output values

  • Building the hypothesis function h(x)

To perform supervised learning, we must decide how we’re going to represent functions/hypotheses h in a computer. As an initial choice, let’s say we decide to approximate y as a linear function of x:

hθ(x) = θ0 + θ1x1 + θ2x2 + … + θnxn

We let x0 = 1 (this is the intercept term), further simplifying to hθ(x) = Σ_{i=0}^{n} θi xi = θ^{T}x

  • Selecting the best parameters θ to predict y

We could have many input features in our training set (e.g. number of rooms, lot size, taxes). So which features do we include in our hypothesis function in order to best predict our outcome (y)? Ans:  Pick values of θ where the predicted value hθ(x) is close to the actual value y.

Cost Function: The following formula defines a function J of θ that measures the difference between hθ(x) and y; the objective is to minimize it:

J(θ) = (1/2) Σ_{i=1}^{m} (hθ(x^(i)) − y^(i))^2

This least mean squares model is a parametric learning algorithm: the number of parameters θ is fixed, so even as our training set size m approaches ∞ the size of hθ(x) is constant. Thus memory requirements are constant.

How do we optimize J(θ)? Ans: We want to choose θ so as to minimize J(θ)

There are several ways to do this:

Gradient Descent
Normal Equations

  • Gradient descent algorithm: A search algorithm that starts with some “initial guess” for θ, and repeatedly changes θ to make J(θ) smaller, until hopefully we converge to a value of θ that minimizes J(θ):

θj := θj − α · ∂J(θ)/∂θj   (simultaneously for all j = 0, …, n)

where α is called the learning rate and is typically a small number

1. For a single training example:  the Widrow-Hoff learning rule, or LMS update rule.

∂J(θ)/∂θj simplifies to (hθ(x) − y) xj, which results in the update rule

θj := θj + α (y^(i) − hθ(x^(i))) xj^(i)

Characteristics:  The magnitude of the update is proportional to the error term (y^(i) − hθ(x^(i))), such that if hθ(x^(i)) is close to y^(i), the parameter changes by only a small increment

Courtesy of Stanford SCPD – CS229  Machine Learning
Can be susceptible to local optima

For linear regression, however, the cost function J(θ) is a convex quadratic, so it has no local optima other than the single global optimum.

2. For multiple training examples:

M1: Batch Gradient Descent

Repeat until convergence:  θj := θj + α Σ_{i=1}^{m} (y^(i) − hθ(x^(i))) xj^(i)   (for every j)

The quantity in the summation in the update rule above is just −∂J(θ)/∂θj (for the original definition of J). So, this is simply gradient descent on the original cost function J.

Evaluation: Batch gradient descent has to scan through the entire training set before taking a single step
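A minimal batch-gradient-descent sketch in NumPy on a made-up one-feature dataset. Note one assumption: the summed gradient is divided by m (an averaging variant chosen here for a stable step size; the rule as written above omits the 1/m):

```python
import numpy as np

# Toy data: y = 3 + 2x plus a little noise, so theta should approach (3, 2)
rng = np.random.default_rng(0)
m = 100
x = rng.uniform(0, 1, m)
y = 3.0 + 2.0 * x + 0.05 * rng.normal(size=m)
X = np.column_stack([np.ones(m), x])   # x0 = 1 intercept column

theta = np.zeros(2)
alpha = 0.1
for _ in range(5000):
    # Batch update: every step uses ALL m examples
    # theta_j := theta_j + alpha * (1/m) * sum_i (y_i - h(x_i)) * x_ij
    theta = theta + alpha * X.T @ (y - X @ theta) / m
```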

M2: Stochastic Gradient Descent (also incremental gradient descent)

Loop: for i = 1 to m:  θj := θj + α (y^(i) − hθ(x^(i))) xj^(i)   (for every j)

Evaluation: We repeatedly run through the training set, and each time we encounter a training example, we update the parameters according to the gradient of the error with respect to that single training example only.

This method converges toward the minimum more rapidly, but has the potential of overshooting the minimum and then oscillating around it. By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also possible to ensure that the parameters converge to the global minimum rather than merely oscillate around it.
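A stochastic-gradient-descent sketch on the same kind of toy data, with a decaying learning rate as described above (the schedule 0.5/(1 + epoch) is an arbitrary choice for the demo):

```python
import numpy as np

# Toy data: y = 3 + 2x plus noise, so theta should approach (3, 2)
rng = np.random.default_rng(0)
m = 200
x = rng.uniform(0, 1, m)
y = 3.0 + 2.0 * x + 0.05 * rng.normal(size=m)
X = np.column_stack([np.ones(m), x])   # x0 = 1 intercept column

theta = np.zeros(2)
for epoch in range(200):
    alpha = 0.5 / (1 + epoch)          # slowly decrease alpha toward zero
    for i in rng.permutation(m):
        # Update using the gradient of the error on example i ONLY:
        # theta_j := theta_j + alpha * (y_i - h(x_i)) * x_ij
        theta = theta + alpha * (y[i] - X[i] @ theta) * X[i]
```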

  • Normal Equation: The goal is to minimize J by explicitly taking its derivatives with respect to the θj’s and setting them to zero (the gradient is zero at the minimum).

Matrix derivatives
For a function f mapping from m x n matrices to real numbers, the gradient of f w.r.t. A is the m x n matrix ∇Af(A) whose (i, j) entry is ∂f/∂Aij

Trace is the sum of the diagonal entries for a square matrix: trA = Σ_i Aii

Properties of Traces

trAB = trBA
trABC = trCAB = trBCA

Additional rules

(1) ∇A trAB = B^{T}
(2) ∇A^{T} f(A) = (∇A f(A))^{T}
(3) ∇A trABA^{T}C = CAB + C^{T}AB^{T}
(4) ∇A |A| = |A|(A^{-1})^{T}

For (1), we could treat the trace as the inner product of two long vectors, [A(1,1), …, A(1,n), …, A(n,1), …, A(n,n)] (A read by rows) and [B(1,1), …, B(n,1), …, B(1,n), …, B(n,n)] (B read by columns)

Thus, the trace is trAB = Σ_p Σ_q A(p,q)·B(q,p), which is a linear function of all elements of A.

Since only one term in the sum contains A(p,q), after applying the partial derivative we conclude (∇A trAB)(p,q) = B(q,p), i.e. ∇A trAB = B^{T}
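Both the cyclic property and rule (1) can be checked numerically. Since trAB is linear in A, a finite-difference gradient recovers B^{T} almost exactly (a throwaway check on random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))

# Cyclic property: trAB = trBA
t_ab = np.trace(A @ B)
t_ba = np.trace(B @ A)

# Finite-difference gradient of f(A) = tr(AB) with respect to A;
# entry (p, q) should come out as B(q, p), i.e. grad = B^T
eps = 1e-6
grad = np.zeros((n, n))
for p in range(n):
    for q in range(n):
        E = np.zeros((n, n))
        E[p, q] = eps
        grad[p, q] = (np.trace((A + E) @ B) - np.trace(A @ B)) / eps
```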

Least Squares method (Applying Matrix rules)
We use a compact vector notation, defining a design matrix X whose rows contain the
input values from our training examples

*Proof  that  (1/2)(Xθ − ⃗y)^{T}(Xθ − ⃗y) = J(θ)

  1. Define the design matrix X which contains the training examples’ input values (x^(i))^{T} in rows
  2. Let ⃗y be the m-dimensional vector containing all the target values from the training set
  3. Xθ − ⃗y is the m-vector whose i-th entry is (x^(i))^{T}θ − y^(i)
  4. Since (x^(i))^{T}θ, aka the Hypothesis, is equivalent to hθ(x^(i))
    from the linear equation hθ(x) = θ^{T}x
    *Note the transpose (x^(i))^{T} is simply a way of expressing x^(i) as a row
  5. So Xθ − ⃗y is the m-vector whose i-th entry is hθ(x^(i)) − y^(i)
  6. The inner product of a vector by itself is equivalent to the sum of the squares of its elements: z^{T}z = Σ_i zi^2
  7. With a simple transformation,

(1/2)(Xθ − ⃗y)^{T}(Xθ − ⃗y) = (1/2) Σ_{i=1}^{m} (hθ(x^(i)) − y^(i))^2 = J(θ)

Given J(θ) = (1/2)(Xθ − ⃗y)^{T}(Xθ − ⃗y), taking the derivative w.r.t. θ (using the trace rules above) gives

∇θ J(θ) = X^{T}Xθ − X^{T}⃗y

We set this derivative == 0 and solve for θ, obtaining the normal equations X^{T}Xθ = X^{T}⃗y, so

θ = (X^{T}X)^{-1}X^{T}⃗y

Pseudo Inverse is (X^{T}X)^{-1}X^{T}
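A quick NumPy check of the closed-form solution on made-up data. np.linalg.solve is used on the normal equations instead of forming the inverse explicitly (the numerically preferred route), and the result is cross-checked against NumPy’s built-in least-squares solver:

```python
import numpy as np

# Toy data: y = 3 + 2x plus noise, so theta should approach (3, 2)
rng = np.random.default_rng(0)
m = 100
x = rng.uniform(0, 1, m)
y = 3.0 + 2.0 * x + 0.05 * rng.normal(size=m)
X = np.column_stack([np.ones(m), x])   # x0 = 1 intercept column

# Normal equations: X^T X theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares solver
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```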


Value function approximation

Linear quadratic regulation (LQR)

Value iteration and policy iteration