
ML | Cost function in logistic regression

In linear regression, the cost is the mean squared error, J(θ) = (1/m) Σ_{i=1}^{m} (1/2)(hθ(x^(i)) − y^(i))^2, and the hypothesis hθ(x) = θᵀx is linear. But for logistic regression, the hypothesis is the sigmoid function, hθ(x) = 1 / (1 + e^(−θᵀx)).

Substituting this non-linear hypothesis into the squared-error cost results in a non-convex cost function with many local optima, which makes it very hard for gradient descent to find the global optimum.
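As a minimal sketch (the function names here are illustrative, not from the article), the sigmoid hypothesis can be written in a few lines of NumPy:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real-valued input into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, X):
    # h_theta(x) = sigmoid(theta^T x), computed for every row of X at once.
    return sigmoid(X @ theta)
```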

So, for logistic regression, the cost function for a single training example is:

Cost(hθ(x), y) = −log(hθ(x))        if y = 1
Cost(hθ(x), y) = −log(1 − hθ(x))    if y = 0

If y = 1:

Cost = 0 when hθ(x) = 1, i.e. the prediction is exactly right.
But as hθ(x) → 0, Cost → infinity, so the model is penalized heavily for confidently predicting the wrong class.

If y = 0:

Cost = 0 when hθ(x) = 0.
But as hθ(x) → 1, Cost → infinity.

So, combining both cases into a single expression and averaging over all m training examples gives the logistic regression cost function:

J(θ) = −(1/m) Σ_{i=1}^{m} [ y^(i) log(hθ(x^(i))) + (1 − y^(i)) log(1 − hθ(x^(i))) ]

This function is convex, so gradient descent is guaranteed to converge to the global minimum.
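The combined cost function translates directly into code. Below is a small sketch, assuming X is an (m, n) design matrix, y a vector of 0/1 labels, and theta the parameter vector; compute_cost is an illustrative name, and the eps guard is an added assumption to keep log() away from zero:

```python
import numpy as np

def compute_cost(theta, X, y):
    # J(theta) = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) )
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # sigmoid hypothesis
    eps = 1e-15  # avoids log(0) for predictions saturated at 0 or 1
    return -(1.0 / m) * np.sum(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
```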

To fit the parameters θ, J(θ) must be minimized, and for this we use gradient descent.

Gradient descent: the update rule, θ_j := θ_j − (α/m) Σ_{i=1}^{m} (hθ(x^(i)) − y^(i)) x_j^(i), looks identical to the one for linear regression, but the difference lies in the hypothesis hθ(x), which is now the sigmoid function.
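Putting the pieces together, here is a sketch of batch gradient descent for logistic regression (gradient_descent, alpha, and num_iters are illustrative choices, not from the article):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, num_iters=1000):
    # Repeatedly moves theta in the direction of steepest descent of J(theta).
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # sigmoid hypothesis
        gradient = (1.0 / m) * (X.T @ (h - y))  # partial derivatives of J(theta)
        theta -= alpha * gradient               # simultaneous update of all theta_j
    return theta
```

Note that the gradient (1/m) Σ (hθ(x^(i)) − y^(i)) x_j^(i) has the same form as in linear regression; only the definition of h changes.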
