ML (Machine Learning) :: Regularized Logistic Regression – Gradient descent & Advanced optimization

The contents of this post are based on the free Coursera Machine Learning course taught by Andrew Ng.

Previously, for Linear regression, we learned about Gradient descent and the Normal equation. For logistic regression, we will learn Gradient descent and Advanced optimization methods.

This time, we will learn how to adapt Gradient descent and more advanced optimization techniques to regularized logistic regression.

1. Gradient descent algorithm

1.1 Without Regularization

– Cost function of Logistic regression
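In the course's notation, with m training examples, the (unregularized) cost function is

J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]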

– Gradient descent
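Repeat until convergence, simultaneously updating every \theta_j for j = 0, 1, \ldots, n:

\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}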

1.2 With Regularization

– Cost function for Logistic regression
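Adding the regularization term with parameter \lambda (which, by convention, does not penalize \theta_0):

J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2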

– Gradient descent
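Because \theta_0 is not regularized, its update keeps the original form, while the other parameters get an extra \frac{\lambda}{m}\theta_j term:

\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)}

\theta_j := \theta_j - \alpha \left[ \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \right], \qquad j = 1, 2, \ldots, n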

It might look superficially identical to the update rule for Regularized Linear regression. But don't forget that the hypothesis is different, as shown below; therefore the two are not actually the same.
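For logistic regression, the hypothesis is the sigmoid function

h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}

whereas for linear regression h_\theta(x) = \theta^T x, so the two update rules differ in substance even though they share the same form.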

Here, J(Θ) is the cost function with the regularization term applied.

e.g.) The regularized update rules can be illustrated in code, as sketched below.
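This is a minimal NumPy sketch, not code from the course; the function names, the toy data X and y, and the settings alpha and lam are illustrative assumptions.

import numpy as np

def sigmoid(z):
    # Logistic function: h_theta(x) = 1 / (1 + e^(-theta^T x))
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(theta, X, y, lam):
    # J(theta) with the regularization term; theta[0] is not penalized
    m = len(y)
    h = sigmoid(X @ theta)
    cost = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    return cost + (lam / (2 * m)) * np.sum(theta[1:] ** 2)

def gradient_descent(theta, X, y, alpha, lam, num_iters):
    # Simultaneous updates; the (lambda/m) * theta_j term skips theta_0
    m = len(y)
    for _ in range(num_iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (h - y) / m
        grad[1:] += (lam / m) * theta[1:]
        theta = theta - alpha * grad
    return theta

# Toy data: the first column of X is the intercept feature x_0 = 1
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = gradient_descent(np.zeros(2), X, y, alpha=0.1, lam=1.0, num_iters=1000)
print(regularized_cost(theta, X, y, lam=1.0))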

 

2. Advanced optimization
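Instead of gradient descent, more sophisticated optimization algorithms such as Conjugate gradient, BFGS, and L-BFGS can be used to minimize J(Θ). They pick an effective learning rate automatically (no need to choose α by hand) and often converge faster, at the cost of being more complex. In the course they are used through Octave's fminunc, which takes a function returning both the cost J(Θ) and its gradient. A rough Python equivalent is sketched below, reusing the sigmoid helper and the toy X and y from the earlier example; scipy.optimize.minimize with jac=True expects the objective to return the pair (cost, gradient).

from scipy.optimize import minimize

def cost_and_grad(theta, X, y, lam):
    # Returns (J(theta), gradient), the pair fminunc-style optimizers expect
    m = len(y)
    h = sigmoid(X @ theta)
    cost = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m \
           + (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    grad = X.T @ (h - y) / m
    grad[1:] += (lam / m) * theta[1:]
    return cost, grad

# jac=True tells minimize that cost_and_grad returns both value and gradient
res = minimize(cost_and_grad, np.zeros(X.shape[1]), args=(X, y, 1.0),
               method="L-BFGS-B", jac=True)
theta_opt = res.x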
