Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

python - Gradient descent for ridge regression

I'm trying to write a code that return the parameters for ridge regression using gradient descent. Ridge regression is defined as

L(w) = Σᵢ (yᵢ − w·xᵢ)² + λ‖w‖²

Here L is the loss (or cost) function, w is the parameter vector (which absorbs the intercept b), the xᵢ are the data points, the yᵢ are the labels for each vector xᵢ, and λ is a regularization constant (it corresponds to C in the code below). So L(w) evaluates to a single number.
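As a sanity check on the definition, the loss Σᵢ (yᵢ − w·xᵢ)² + λ‖w‖² can be computed in vectorized NumPy and compared against the per-sample loop (a minimal sketch on random data; the names and the λ value are just for illustration):

```python
import numpy as np

def ridge_loss(w, x, y, lam):
    """Ridge loss: sum of squared residuals plus an L2 penalty on w."""
    residuals = y - x @ w            # shape (n,)
    return np.sum(residuals**2) + lam * np.dot(w, w)

rng = np.random.default_rng(0)
x = rng.random((5, 3))
y = rng.random(5)
w = rng.random(3)

# The vectorized loss matches the per-sample summation from the definition.
loop_loss = sum((y[i] - np.dot(w, x[i]))**2 for i in range(len(y))) + 0.1 * np.dot(w, w)
print(np.isclose(ridge_loss(w, x, y, 0.1), loop_loss))  # True
```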

The gradient descent algorithm that I should implement looks like this:

w_{t+1} = w_t − η_t ∇L(w_t)

where ∇L is the gradient of L with respect to w, η_t is the step size, and t is the time or iteration counter. For the ridge loss above, the gradient is

∇L(w) = −2 Σᵢ (yᵢ − w·xᵢ) xᵢ + 2λ w
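The gradient −2 Σᵢ (yᵢ − w·xᵢ) xᵢ + 2λw used in the update can be verified numerically with finite differences (a sketch on random data; the names are illustrative):

```python
import numpy as np

def ridge_loss(w, x, y, lam):
    r = y - x @ w
    return np.sum(r**2) + lam * np.dot(w, w)

def ridge_grad(w, x, y, lam):
    """Analytic gradient: -2 * sum_i (y_i - w.x_i) * x_i + 2*lam*w."""
    residuals = y - x @ w
    return -2 * x.T @ residuals + 2 * lam * w

rng = np.random.default_rng(1)
x = rng.random((6, 4))
y = rng.random(6)
w = rng.random(4)
lam = 0.5

# Central finite differences, one coordinate at a time.
eps = 1e-6
num = np.array([(ridge_loss(w + eps * e, x, y, lam)
                 - ridge_loss(w - eps * e, x, y, lam)) / (2 * eps)
                for e in np.eye(4)])
print(np.allclose(ridge_grad(w, x, y, lam), num, atol=1e-4))  # True
```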

My code:

import numpy as np

def ridge_regression_GD(x,y,C):
    x=np.insert(x,0,1,axis=1) # add a constant feature 1 at the start of each row: n x (d+1)
    w=np.zeros(len(x[0,:])) # d+1
    t=0
    eta=1
    summ = np.zeros(1)
    grad = np.zeros(1)
    losses = np.array([0])
    loss_stry = 0
    while eta > 2**-30:
        for i in range(0,len(y)): # here we calculate the summation for all rows for loss and gradient
            summ=summ+((y[i,]-np.dot(w,x[i,]))*x[i,])
            loss_stry=loss_stry+((y[i,]-np.dot(w,x[i,]))**2)
        losses=np.insert(losses,len(losses),loss_stry+(C*np.dot(w,w)))
        grad=((-2)*summ)+(np.dot((2*C),w))
        eta=eta/2
        w=w-(eta*grad)
        t+=1
        summ = np.zeros(1)
        loss_stry = 0
    b=w[0]
    w=w[1:]
    return w,b,losses

The output should be the intercept parameter b, the vector w and the loss in each iteration, losses.

My problem is that when I run the code I get increasing values for w and for the losses, both in the order of 10^13.
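This kind of geometric blow-up is what gradient descent produces on a quadratic once the step size crosses the stability threshold. A one-dimensional illustration of the mechanism (a toy example, unrelated to the data above):

```python
# Minimizing f(w) = a*w^2, with gradient 2*a*w. The update w <- w - eta*2*a*w
# multiplies w by (1 - 2*a*eta) each step, so |1 - 2*a*eta| > 1 diverges.
a, w, eta = 10.0, 1.0, 0.25          # 2*a*eta = 5 -> factor of -4 per step
history = []
for _ in range(5):
    w = w - eta * 2 * a * w
    history.append(abs(w))
print(history)  # [4.0, 16.0, 64.0, 256.0, 1024.0] -- growing geometrically
```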

Would really appreciate if you could help me out. If you need any more information or clarification just ask for it.

NOTE: This post was deleted from Cross Validated forum. If there's a better forum to post it please let me know.

question from:https://stackoverflow.com/questions/65909753/gradient-descent-for-ridge-regression


1 Answer


After checking your code, it turns out your implementation of ridge regression is correct. The increasing values of w (and hence the increasing losses) come from extreme, unstable parameter updates: abs(eta*grad) is too big. I adjusted the learning rate and the weight-decay constant to an appropriate range and changed the way the learning rate decays, and then everything works as expected:

import numpy as np

sample_num = 100
x_dim = 10
x = np.random.rand(sample_num, x_dim)
w_tar = np.random.rand(x_dim)
b_tar = np.random.rand(1)[0]
y = np.matmul(x, np.transpose([w_tar])) + b_tar
C = 1e-6

def ridge_regression_GD(x,y,C):
    x = np.insert(x,0,1,axis=1) # add a constant feature 1 at the start of each row: n x (d+1)
    x_len = len(x[0,:])
    w = np.zeros(x_len) # d+1
    t = 0
    eta = 3e-3
    summ = np.zeros(x_len)
    grad = np.zeros(x_len)
    losses = np.array([0])
    loss_stry = 0

    for epoch in range(50):
        for i in range(len(y)): # summation over all samples for the loss and the gradient
            summ = summ + (y[i,] - np.dot(w, x[i,])) * x[i,]
            loss_stry += (y[i,] - np.dot(w, x[i,]))**2

        losses = np.insert(losses, len(losses), loss_stry + C * np.dot(w, w))
        grad = -2 * summ + 2 * C * w
        w -= eta * grad

        eta *= 0.9
        t += 1
        summ = np.zeros(x_len)
        loss_stry = 0

    return w[1:], w[0], losses

w, b, losses = ridge_regression_GD(x, y, C)
print("losses: ", losses)
print("b: ", b)
print("b_tar: ", b_tar)
print("w: ", w)
print("w_tar", w_tar)

x_pre = np.random.rand(3, x_dim)
y_tar = np.matmul(x_pre, np.transpose([w_tar])) + b_tar
y_pre = np.matmul(x_pre, np.transpose([w])) + b
print("y_pre: ", y_pre)
print("y_tar: ", y_tar)

Outputs:

losses: [   0 1888 2450 2098 1128  354   59    5    1    1    1    1    1    1
    1    1    1    1    1    1    1    1    1    1    1    1    1    1
    1    1    1    1    1    1    1    1    1    1    1    1    1    1
    1    1    1    1    1    1    1    1    1]
b:  1.170527138363387
b_tar:  0.894306608050021
w:  [0.7625987  0.6027163  0.58350218 0.49854847 0.52451963 0.59963663
 0.65156702 0.61188389 0.74257133 0.67164963]
w_tar [0.82757802 0.76593551 0.74074476 0.37049698 0.40177269 0.60734677
 0.72304859 0.65733725 0.91989305 0.79020028]
y_pre:  [[3.44989377]
 [4.77838804]
 [3.53541958]]
y_tar:  [[3.32865041]
 [4.74528037]
 [3.42093559]]

As you can see from how the losses change in the output, the learning rate eta = 3e-3 is still a bit too high, so the loss goes up for the first few iterations, but it starts to drop once the learning rate decays to an appropriate value.
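For reference, the gradient-descent result can be sanity-checked against the closed-form ridge solution w = (XᵀX + C·I)⁻¹ Xᵀy on the bias-augmented data (a sketch on fresh synthetic data shaped like the example above; with C as small as 1e-6 this is essentially the least-squares fit, and the tiny penalty on the bias coordinate is negligible):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random((100, 10))
w_tar = rng.random(10)
b_tar = rng.random()
y = x @ w_tar + b_tar
C = 1e-6

xb = np.insert(x, 0, 1, axis=1)                        # prepend bias column
# Normal equations for ridge: (Xb' Xb + C*I) w = Xb' y
w_closed = np.linalg.solve(xb.T @ xb + C * np.eye(11), xb.T @ y)
b, w = w_closed[0], w_closed[1:]
print(np.allclose(w, w_tar, atol=1e-3), np.allclose(b, b_tar, atol=1e-3))
```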

