
Maximum likelihood estimation

Logistic regression works on the principle of maximum likelihood estimation; here, we will explain in detail what it is, so that we can cover some more fundamentals of logistic regression in the following sections. Maximum likelihood estimation is a method of estimating the parameters of a model from observations by finding the parameter values that maximize the likelihood of making those observations; that is, finding parameters that maximize the probability p of the event 1 and (1 - p) of the non-event 0. As we know:

probability(event) + probability(non-event) = 1

Example: The sample (0, 1, 0, 0, 1, 0) is drawn from a binomial distribution. What is the maximum likelihood estimate of µ?

Solution: Given that, for the binomial distribution, P(X=1) = µ and P(X=0) = 1 - µ, where µ is the parameter, the likelihood of the observed sample is:
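L(µ) = P(X=0) * P(X=1) * P(X=0) * P(X=0) * P(X=1) * P(X=0) = µ^2 * (1 - µ)^4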

Here, the log is applied to both sides of the equation for mathematical convenience; also, maximizing the likelihood is the same as maximizing the log of the likelihood:
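log(L(µ)) = 2*log(µ) + 4*log(1 - µ)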

Determining the value of µ that maximizes the likelihood by equating the derivative to zero:
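d/dµ log(L(µ)) = 2/µ - 4/(1 - µ) = 0, which gives 2*(1 - µ) = 4*µ, and hence µ = 1/3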

However, we need to take the second derivative to determine whether the stationary point obtained by equating the first derivative to zero is a maximum or a minimum. If the µ value is a maximum, the second derivative of log(L(µ)) should be negative:
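d²/dµ² log(L(µ)) = -2/µ² - 4/(1 - µ)²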

Even without substituting the value of µ into the second derivative, we can see that it is negative, as the denominators are squared and both terms carry a negative sign. Nonetheless, substituting µ = 1/3 gives:
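-2/(1/3)² - 4/(2/3)² = -18 - 9 = -27 < 0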

Hence, it has been proven that the likelihood is maximized at µ = 1/3. If we substitute this value into the log-likelihood function, we obtain:
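log(L(1/3)) = 2*ln(1/3) + 4*ln(2/3) ≈ -3.82, and therefore -2*ln(L) ≈ 7.64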

The reason for calculating -2*ln(L) is to replicate the metric reported by an actual logistic regression; in fact, it is the first term of the AIC:

AIC = -2*ln(L) + 2*k, where k is the number of parameters estimated by the model

So, logistic regression tries to find the parameters by maximizing the likelihood with respect to the individual parameters. One small difference is that, in logistic regression, the Bernoulli distribution is used rather than the binomial. To be precise, the Bernoulli distribution is just a special case of the binomial with a single trial, as each outcome can take only one of two categories.
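As a quick numerical check of the worked example, the following minimal Python sketch (assuming NumPy and SciPy are available) maximizes the same Bernoulli log-likelihood for the sample (0, 1, 0, 0, 1, 0) by minimizing its negative, and then computes -2*ln(L) and the AIC with k = 1; the variable names are illustrative only:

import numpy as np
from scipy.optimize import minimize_scalar

# Sample from the worked example
sample = np.array([0, 1, 0, 0, 1, 0])

def neg_log_likelihood(mu, x=sample):
    # Bernoulli log-likelihood: sum of x*log(mu) + (1 - x)*log(1 - mu), negated for minimization
    return -np.sum(x * np.log(mu) + (1 - x) * np.log(1 - mu))

# Maximize the likelihood by minimizing its negative over (0, 1)
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")

mu_hat = result.x                  # should be close to 1/3
minus_two_log_l = 2 * result.fun   # -2*ln(L) at the maximum, approximately 7.64
k = 1                              # one estimated parameter (mu)
aic = minus_two_log_l + 2 * k

print("MLE of mu:", mu_hat)
print("-2*ln(L):", minus_two_log_l)
print("AIC:", aic)

Running this sketch should recover the analytical result µ = 1/3 and the corresponding -2*ln(L) value derived above, which mirrors the quantity reported when fitting a logistic regression model.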