Introduction
Today, I will write about logistic regression. Logistic regression is one of the most fundamental models in machine learning: it classifies data into one of two classes.
Overview
This post uses PRML (Pattern Recognition and Machine Learning) as a reference. The iterative reweighted least squares (IRLS) method is used for optimization.
- First, I will introduce the sigmoid function.
- Second, I will define the probability used for classification.
- Third, I will derive the cross-entropy error function.
- Fourth, I will explain IRLS.
Sigmoid function
The sigmoid function is defined as follows.
\[\sigma(a) = \frac{1}{1+\exp(-a)}\]
I will compute the derivative of this function.
\begin{eqnarray*} \frac{d}{d a} \frac{1}{1+\exp(-a)} &=& \frac{\exp(-a)}{(1+\exp(-a))^2} \\ &=& \frac{1}{1+\exp(-a)} \frac{\exp(-a)}{1+\exp(-a)}\\ &=& \frac{1}{1+\exp(-a)} \{ \frac{1+\exp(-a)}{1+\exp(-a)} - \frac{1}{1+\exp(-a)} \} \\ &=& \sigma(a)(1-\sigma(a)) \end{eqnarray*}
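As a quick sanity check, here is a minimal Python sketch (the names and values are my own choices) that implements the sigmoid and compares the analytic derivative \(\sigma(a)(1-\sigma(a))\) with a numerical finite difference.

```python
import numpy as np

def sigmoid(a):
    """Sigmoid function: 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_grad(a):
    """Analytic derivative: sigma(a) * (1 - sigma(a))."""
    s = sigmoid(a)
    return s * (1.0 - s)

# Compare the analytic derivative with a central finite difference.
a = np.linspace(-5.0, 5.0, 11)
eps = 1e-6
numerical = (sigmoid(a + eps) - sigmoid(a - eps)) / (2.0 * eps)
print(np.max(np.abs(numerical - sigmoid_grad(a))))  # should be close to 0
```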
This function is very important in machine learning.
The sigmoid function has the following characteristics.
- The sigmoid function is defined on \((-\infty,\infty)\).
- Its range is \((0,1)\).
- It is a monotonically increasing function.
- It is point-symmetric about \((0,0.5)\) (a short derivation is given below).
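The point symmetry about \((0,0.5)\) can be verified directly from the definition:
\begin{eqnarray*} \sigma(-a) &=& \frac{1}{1+\exp(a)} \\ &=& \frac{\exp(-a)}{\exp(-a)+1} \\ &=& \frac{1+\exp(-a)-1}{1+\exp(-a)} \\ &=& 1 - \sigma(a) \end{eqnarray*}
so \(\sigma(a) + \sigma(-a) = 1\), which is exactly the symmetry about the point \((0,0.5)\).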
Probability of classification
First, we consider the probability that a data point is correctly classified.
The decision boundary is a hyperplane, expressed as follows.
\[w^T x = 0\]
\(w\) is the normal vector of the hyperplane.
- A data point lies in the red domain when \(w^T x > 0\).
- A data point lies in the blue domain when \(w^T x < 0\).
If a data point lies in the blue domain, it belongs to \(C_2\); if it lies in the red domain, it belongs to \(C_1\).
\(C_1\) has label 1, and \(C_2\) has label 0.
We can be confident about the class of a data point that lies far from the hyperplane, but we cannot be sure about the class of a data point that lies near the hyperplane.
We do not want to fully trust the predicted class of data points near the hyperplane.
Therefore, I use the properties of the sigmoid function: I convert the signed distance from the hyperplane to a data point into a probability.
- I treat a data point as belonging to the red domain \((C_1)\) when the probability is higher than 0.5.
- When the probability is exactly 0.5, the class of the data point is unknown (the data point lies on the hyperplane).
- I treat a data point as belonging to the blue domain \((C_2)\) when the probability is lower than 0.5.
- The farther a data point is from the hyperplane, the farther the probability is from 0.5.
- The nearer a data point is to the hyperplane, the closer the probability is to 0.5.
- When the probability is near 1, I am confident that the class of the data point is \(C_1\).
- When the probability is near 0, I am confident that the class of the data point is \(C_2\).
The sigmoid function satisfies all of these requirements.
Thus, I treat the output of the sigmoid function, applied to the signed distance between the data point and the hyperplane, as a probability. A minimal sketch of this idea follows.
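Here is a small Python sketch of this idea, with a hypothetical weight vector \(w\) and a few hypothetical data points; the sigmoid of \(w^T x\) is used as the probability of class \(C_1\).

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical normal vector of the hyperplane w^T x = 0.
w = np.array([2.0, -1.0])

# Hypothetical data points: far on the positive side, near the
# hyperplane, and far on the negative side.
X = np.array([[3.0, 0.0],
              [0.1, 0.1],
              [-3.0, 0.0]])

for x in X:
    p = sigmoid(w @ x)           # probability that x belongs to C_1
    label = 1 if p > 0.5 else 0  # classify with the 0.5 threshold
    print(f"w^T x = {w @ x:+.2f}, p(C_1) = {p:.3f}, label = {label}")
```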
Cross-entropy error function
Given a data set \(\{\phi_n, t_n\}\), where \(t_n \in \{0,1\}\), I define the probability using the sigmoid function.
I define the likelihood function as follows.
\[p(t|w) = \prod_{n=1}^{N} y_{n}^{t_n} \{1-y_n\}^{1-t_n}\]
Here, \[y_n = \sigma(w^T \phi_n) \]
\[t = (t_1, t_2, \dots, t_N)^T\]
\(w^T \phi_n\) is proportional to the signed distance between \(\phi_n\) and the hyperplane.
\(y_n\) is the probability that \(\phi_n\) belongs to \(C_1\).
Taking the negative logarithm of the likelihood function gives the following.
\[E(w) = -\log p(t|w) = - \sum_{n=1}^{N} \{t_n \log(y_n) + (1-t_n ) \log(1-y_n)\}\]
\(E(w)\) is called the cross-entropy error function.
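The cross-entropy error can be computed directly from this definition. Below is a minimal sketch, where the design matrix Phi (whose rows are \(\phi_n^T\)) and the label vector t are hypothetical names of my own:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cross_entropy_error(w, Phi, t, eps=1e-12):
    """E(w) = -sum_n [t_n log y_n + (1 - t_n) log(1 - y_n)]."""
    y = sigmoid(Phi @ w)
    y = np.clip(y, eps, 1.0 - eps)  # avoid log(0) for saturated outputs
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

# Toy example: when w = 0, every y_n = 0.5, so E(w) = N * log(2).
Phi = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])
t = np.array([1.0, 0.0, 1.0])
print(cross_entropy_error(np.zeros(2), Phi, t))  # approx 2.079 = 3 log 2
```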
IRLS
IRLS stands for iterative reweighted least squares. This method estimates \(w\) using the Newton-Raphson method. First, I compute the gradient of \(E(w)\).
\begin{eqnarray*} \nabla E(w) &=& -\nabla \sum_{n=1}^{N} \{t_n \log(y_n) + (1-t_n ) \log(1-y_n)\}\\ &=& -\nabla \sum_{n=1}^{N} \{t_n \log(\sigma(w^T \phi_n)) + (1-t_n ) \log(1-\sigma(w^T \phi_n))\}\\ &=&-\sum_{n=1}^{N} \{ \frac{t_n \sigma(w^T \phi_n)(1-\sigma(w^T \phi_n))}{\sigma(w^T \phi_n)}\phi_n - \frac{(1-t_n) \sigma(w^T \phi_n)(1-\sigma(w^T \phi_n))}{1-\sigma(w^T \phi_n)}\phi_n \}\\ &=& -\sum_{n=1}^{N} \frac{t_n (1-\sigma(w^T \phi_n))-(1-t_n)\sigma(w^T \phi_n)}{\sigma(w^T \phi_n) (1-\sigma(w^T \phi_n))} \{\sigma(w^T \phi_n) (1-\sigma(w^T \phi_n))\} \phi_n \\ &=& -\sum_{n=1}^{N} \{t_n (1-\sigma(w^T \phi_n))-(1-t_n)\sigma(w^T \phi_n)\} \phi_n \\ &=& -\sum_{n=1}^{N} \{t_n - t_n \sigma(w^T \phi_n) - \sigma(w^T \phi_n) + t_n \sigma(w^T \phi_n)\} \phi_n\\ &=&\sum_{n=1}^{N} \{\sigma(w^T \phi_n) - t_n \} \phi_n\\ &=& \sum_{n=1}^{N} (y_n - t_n) \phi_n\\ &=& \Phi^T (y - t)\\ \end{eqnarray*}
Here, \(\Phi\) is the design matrix whose \(n\)-th row is \(\phi_n^T\), and \(y = (y_1,\dots,y_N)^T\).
Next, I compute the Hessian matrix of \(E(w)\).
\begin{eqnarray*} H &=& \nabla \nabla E(w) \\ &=& \nabla \Phi^T(y-t) \\ &=& \nabla \sum_{n=1}^{N} (y_n-t_n) \phi_{n} \\ &=&\sum_{n=1}^{N} y_n(1-y_n) \phi_n \phi_n^T \\ &=& \Phi^T R \Phi \end{eqnarray*}
Here, \(R\) is a diagonal matrix whose \((n,n)\) element is \(y_n(1-y_n)\).
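In matrix form, the gradient \(\Phi^T(y-t)\) and the Hessian \(\Phi^T R \Phi\) can be computed as follows (a sketch; the function name is my own):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grad_and_hessian(w, Phi, t):
    """Gradient Phi^T (y - t) and Hessian Phi^T R Phi of E(w)."""
    y = sigmoid(Phi @ w)
    grad = Phi.T @ (y - t)
    R = np.diag(y * (1.0 - y))  # diagonal matrix with y_n (1 - y_n)
    H = Phi.T @ R @ Phi
    return grad, H
```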
I use these expressions to estimate \(w\) with the Newton-Raphson update.
\begin{eqnarray*} w_{new} &=& w_{old} - \{ \Phi^T R \Phi \}^{-1} \Phi^T (y-t) \\ &=& \{ \Phi^T R \Phi \}^{-1} \{ \Phi^T R \Phi w_{old} - \Phi^T(y-t)\} \\ &=& \{ \Phi^T R \Phi \}^{-1} \Phi^T R z \end{eqnarray*}
Here, \(z = \Phi w_{old} - R^{-1}(y-t)\).
Thus, \(w\) is updated repeatedly by this formula until it converges.
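Putting the pieces together, a minimal IRLS sketch might look like the following (the function name, starting point, and stopping rule are my own choices):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def irls(Phi, t, n_iter=20, tol=1e-6):
    """Estimate w by iterative reweighted least squares (Newton-Raphson)."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        y = sigmoid(Phi @ w)
        R = np.diag(y * (1.0 - y))
        grad = Phi.T @ (y - t)           # gradient of E(w)
        H = Phi.T @ R @ Phi              # Hessian of E(w)
        step = np.linalg.solve(H, grad)  # Newton-Raphson step
        w = w - step
        if np.linalg.norm(step) < tol:   # stop when the update is tiny
            break
    return w

# Toy usage with hypothetical, non-separable data.
Phi = np.array([[1.0, 2.0], [1.0, 0.5], [1.0, -0.5], [1.0, -2.0]])
t = np.array([1.0, 0.0, 1.0, 0.0])
print(irls(Phi, t))
```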
This post covered the theory of logistic regression.
Next, I will implement logistic regression.
I hope you will read my next post.
* I have written the implementation of Logistic Regression:
Implementation of Logistic Regression