[MATLAB] Stanford Linear Regression and Logistic Regression Exercises
1. Find a cost function (costFunction) that measures the error.
2. Fit the parameters theta so that costFunction is minimized: run gradient descent for n iterations, updating theta each iteration so that costFunction decreases.
3. With suitable parameters theta in hand, make predictions (see the sketch after this list).
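A minimal end-to-end sketch of these three steps for linear regression. The data file name, learning rate, and iteration count are placeholders, and the gradientDescent signature assumed here is the standard one from the exercise (it is implemented below):

data = load('ex1data1.txt');               % placeholder data file: one feature, one target per row
X = [ones(size(data,1), 1), data(:,1)];    % prepend a column of ones for the intercept term
y = data(:,2);
theta = zeros(2, 1);                       % start from all-zero parameters

% Steps 1-2: measure the error with computeCost, shrink it with gradient descent
[theta, J_history] = gradientDescent(X, y, theta, 0.01, 1500);

% Step 3: predict the target for a new input x = 3.5
prediction = [1, 3.5] * theta;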
I. Linear Regression
computeCost:
J = 0;
for i = 1:m
    h = X(i,:) * theta;       % hypothesis for the i-th example
    J = J + (h - y(i))^2;     % accumulate the squared error
end
J = J / (2*m);
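For comparison, the same cost can be computed in one vectorized step; this is equivalent to the loop above:

h = X * theta;                    % hypotheses for all m examples at once
J = (h - y)' * (h - y) / (2*m);   % summed squared error, divided by 2m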
Gradient descent loop to fit the parameters theta:
for iter = 1:num_iters
    s = zeros(size(theta,1), 1);   % gradient accumulator ('sum' renamed to avoid shadowing the built-in)
    for j = 1:size(theta,1)
        for i = 1:m
            h = X(i,:) * theta;
            s(j) = s(j) + (h - y(i)) * X(i,j);
        end
        % theta(j) = theta(j) - alpha * s(j) / m;   % wrong: theta must be updated simultaneously
    end
    theta = theta - s .* alpha ./ m;   % simultaneous update of all parameters

    % Save the cost J in every iteration
    J_history(iter) = computeCostMulti(X, y, theta);
end
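The two inner loops can also be collapsed into matrix operations; a vectorized sketch of the same update (same alpha, m, and J_history as above):

for iter = 1:num_iters
    grad = X' * (X * theta - y) / m;   % gradient of the cost with respect to theta
    theta = theta - alpha * grad;      % simultaneous update without the j-loop
    J_history(iter) = computeCostMulti(X, y, theta);
end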
II. Logistic Regression
costFunctionReg:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
%   J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. the parameters.

% Initialize some useful values
m = length(y);   % number of training examples

J = 0;
grad = zeros(size(theta));

% Unregularized cross-entropy cost
for i = 1:m
    J = J - y(i)*log(h_fun(X(i,:), theta)) - (1 - y(i))*log(1 - h_fun(X(i,:), theta));
end
J = J / m;

% Regularization term: skip theta(1), the intercept
reg = 0;
for j = 2:length(theta)
    reg = reg + theta(j)^2;
end
J = J + reg * lambda / (2*m);

% Gradient for the intercept, which is not regularized
for i = 1:m
    grad(1) = grad(1) + (h_fun(X(i,:), theta) - y(i)) * X(i,1);
end
grad(1) = grad(1) / m;

% Gradient for the remaining parameters, each with the
% regularization term lambda*theta(j)/m added once
for j = 2:length(theta)
    for i = 1:m
        grad(j) = grad(j) + (h_fun(X(i,:), theta) - y(i)) * X(i,j);
    end
    grad(j) = grad(j) / m + lambda * theta(j) / m;
end

end
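The function above relies on a helper h_fun that the post does not show; presumably it evaluates the logistic (sigmoid) hypothesis for one example. A minimal sketch under that assumption:

function h = h_fun(x, theta)
%H_FUN Logistic hypothesis for a single example (assumed helper, not shown in the post)
%   x is a 1-by-n row of features, theta an n-by-1 parameter vector.
h = 1 / (1 + exp(-x * theta));   % sigmoid of the linear score x*theta
end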
Fitting the parameters:
% Initialize fitting parameters
initial_theta = zeros(size(X, 2), 1);

% Set regularization parameter lambda (you should vary this)
lambda = 0;

% Set options: supply the gradient and cap the iterations
options = optimset('GradObj', 'on', 'MaxIter', 400);

% Optimize
[theta, J, exit_flag] = ...
    fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
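With the theta returned by fminunc, step 3 is prediction. A sketch that classifies the training examples and reports accuracy, reusing the h_fun helper above (the 0.5 decision threshold is the usual convention, an assumption here):

m = length(y);
p = zeros(m, 1);
for i = 1:m
    p(i) = h_fun(X(i,:), theta) >= 0.5;   % predict class 1 when the estimated probability is at least 0.5
end
fprintf('Train accuracy: %f%%\n', mean(p == y) * 100);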