
Maximum likelihood estimation

Introduction


Today, I will write about maximum likelihood estimation. This is a basic method of statistical estimation, and I want to explain it with an example. Firstly, I will explain likelihood. Secondly, I will explain the likelihood function. Thirdly, I will explain maximum likelihood estimation.

Overview


  • Likelihood
  • Maximum likelihood estimation
  • The problem of maximum likelihood estimation


Likelihood

Suppose we obtain observation data under some precondition.
When we estimate the precondition from the observation data, the likelihood is a value that indicates how plausible it is that the estimate is correct.

This definition may be hard to grasp at first; it was for me as well.
So let me give you an example of likelihood.

I toss a coin. The coin lands heads with probability P and tails with probability 1-P.
For example, suppose I toss the coin 100 times and every toss comes up heads. Then we would estimate that the probability P is 1.0.

If we let P=0.5, the probability that the coin lands heads 100 times is $0.5^{100} \approx 7.8886 \times 10^{-31}$. This is the likelihood when P=0.5.

If we let P=0.99, the probability that the coin lands heads 100 times is $0.99^{100} \approx 0.3660$. This is the likelihood when P=0.99.
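These two likelihood values can be checked directly in Python (a minimal sketch; the variable names are my own):

# Likelihood of observing 100 heads in a row under two candidate values of P.
likelihood_p_half = 0.5 ** 100   # P = 0.5
likelihood_p_099 = 0.99 ** 100   # P = 0.99

print(likelihood_p_half)  # about 7.8886e-31
print(likelihood_p_099)   # about 0.3660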

When the outcome is fixed as "the coin lands heads 100 times", $P(\text{100 heads} \mid P)$, viewed as a function of the variable P, is called the likelihood function of P.

In general, the likelihood is $P(A \mid B = b)$, where the outcome A is fixed and we hypothesize that $B = b$ holds.

We regard the value of b that maximizes the likelihood as a reasonable estimate of b.

For example, let us return to the earlier coin example.

When P=0.5, the likelihood is about $7.8886 \times 10^{-31}$; when P=0.99, the likelihood is about 0.3660.
Thus it is natural for us to prefer P=0.99.

In other words, P=0.99 is a better estimate than P=0.5.

Maximum likelihood estimation

Maximum likelihood estimation is a method for estimating a parameter of a probability distribution from observed data.

Maximum likelihood estimation chooses the parameter that maximizes the likelihood of all the observed data.

Let the probability distribution function be $f$, and let $X_1, X_2, \ldots, X_n$ be a sample such that $X_1, X_2, \ldots, X_n \sim f$.

Then, because the samples are independent and we have to consider the joint probability, the probability that we obtain $X_1, X_2, \ldots, X_n$ from $f$ is
$$\prod_{i=1}^{n} P(X_i).$$
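As a small illustration of this product, here is a Python sketch that computes the joint probability of a hypothetical sample of coin tosses; the data, the value p = 0.75, and the helper name joint_probability are assumptions made only for illustration.

import numpy as np

# Hypothetical observations: 1 = head, 0 = tail.
x = np.array([1, 1, 0, 1, 0, 1, 1, 1])

def joint_probability(x, p):
    # Product of P(X_i) for independent Bernoulli(p) observations.
    return np.prod(p ** x * (1 - p) ** (1 - x))

# Joint probability of this particular sample when p = 0.75.
print(joint_probability(x, 0.75))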

Thus, I define
$$L(\theta) = f(x_1, x_2, \ldots, x_n \mid \theta),$$ which is called the likelihood function.

Then,
$$\theta^{\star} \in \arg\max_{\theta} L(\theta)$$
is called the maximum likelihood estimator,

and
$$\frac{\partial}{\partial \theta} \log L(\theta) = 0$$
is called the likelihood equation.

I will explain the reason for using $\log$ in the next example of maximum likelihood estimation.

Example

Consider $x_1, x_2, \ldots, x_n \in \{0,1\}$. For all $i \in \{1, 2, \ldots, n\}$, if $x_i = 1$ the coin lands heads on the $i$-th toss, and if $x_i = 0$ it lands tails on the $i$-th toss.

Then the likelihood function is
$$L(\theta) = P(x_1, x_2, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} \theta^{x_i} (1-\theta)^{1-x_i},$$
because for all $i \in \{1, 2, \ldots, n\}$, $x_i \sim p(k;\theta) = \theta^{k} (1-\theta)^{1-k}$ for $k \in \{0,1\}$.
Here, $\theta$ is the probability that the coin lands heads.

I want to maximize $L(\theta)$ with respect to $\theta$, but it is difficult to differentiate because $L(\theta)$ is expressed as a product.

The $\log$ function solves this problem.
Because $\log$ is a monotonically increasing function, the optimal solutions of $L(\theta)$ and $\log L(\theta)$ coincide.
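A quick numerical way to see this, on a hypothetical set of coin tosses, is to evaluate both $L(\theta)$ and $\log L(\theta)$ on a grid of $\theta$ values and confirm that they peak at the same place (a sketch under my own assumptions, not part of the derivation):

import numpy as np

# Hypothetical coin tosses: 1 = head, 0 = tail (7 heads out of 10).
x = np.array([1, 0, 1, 1, 1, 0, 1, 0, 1, 1])
thetas = np.linspace(0.001, 0.999, 999)

# Likelihood and log-likelihood evaluated on the grid.
L = np.array([np.prod(t ** x * (1 - t) ** (1 - x)) for t in thetas])
logL = np.array([np.sum(x * np.log(t) + (1 - x) * np.log(1 - t)) for t in thetas])

# Both curves are maximized at the same theta (about 0.7 here).
print(thetas[np.argmax(L)], thetas[np.argmax(logL)])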

Thus, I maximize $\log L(\theta)$ instead.

\begin{eqnarray*}
\log L(\theta) &=& \log \prod_{i=1}^{n} \theta^{x_i} (1-\theta)^{1-x_i} \\
&=& \sum_{i=1}^{n} \left( \log \theta^{x_i} + \log (1-\theta)^{1-x_i} \right) \\
&=& \sum_{i=1}^{n} \left( x_i \log \theta + (1-x_i)\log(1-\theta) \right)
\end{eqnarray*}

Setting the partial derivative with respect to $\theta$ to zero gives

\begin{eqnarray*}
\frac{\partial}{\partial \theta} \log L(\theta) &=& 0 \\
\frac{\partial}{\partial \theta} \sum_{i=1}^{n} \left( x_i \log \theta + (1-x_i) \log (1-\theta) \right) &=& 0 \\
\sum_{i=1}^{n} \left( \frac{x_i}{\theta} - \frac{1-x_i}{1-\theta} \right) &=& 0 \\
\frac{1}{\theta} \sum_{i=1}^{n} x_i - \frac{1}{1-\theta} \sum_{i=1}^{n} (1-x_i) &=& 0 \\
(1-\theta) \sum_{i=1}^{n} x_i - \theta \sum_{i=1}^{n} (1-x_i) &=& 0 \\
\sum_{i=1}^{n} x_i - \theta \sum_{i=1}^{n} x_i - \theta \sum_{i=1}^{n} 1 + \theta \sum_{i=1}^{n} x_i &=& 0 \\
\sum_{i=1}^{n} x_i - n \theta &=& 0 \\
\theta &=& \frac{\sum_{i=1}^{n} x_i}{n}
\end{eqnarray*}


This optimum is the sample mean of $x_1, x_2, \ldots, x_n$.
If you observe 100 heads and 0 tails, then $\theta = 1$.

If you observe 50 heads and 50 tails, then $\theta = 0.5$.
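As a sanity check of this closed form, the following sketch compares the sample mean with a direct numerical maximization of the log-likelihood. It uses scipy.optimize.minimize_scalar, which is an extra dependency not used elsewhere in this post, and the data are hypothetical:

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical coin tosses: 1 = head, 0 = tail (7 heads out of 10).
x = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])

def neg_log_likelihood(theta):
    # Negative of log L(theta) from the derivation above.
    return -np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")

# The numerical maximizer agrees with the closed-form sample mean (about 0.7).
print(result.x, x.mean())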


The problem of maximum likelihood estimation

For example,
if you observe 100 heads and 0 tails, then $\theta = 1$,
but if you observe only 3 heads and 0 tails, then $\theta = 1$ as well.

However, it is dangerous to conclude $\theta = 1$ from only 3 tosses.

This is the problem: maximum likelihood estimation is unreliable when the number of trials is small.
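To make this concrete, the closed-form estimator gives exactly the same answer, $\theta = 1$, whether you see 3 straight heads or 100 straight heads, even though a fair coin produces 3 straight heads one time in eight. A tiny sketch with hypothetical data:

import numpy as np

three_heads = np.ones(3)      # 3 heads, 0 tails
hundred_heads = np.ones(100)  # 100 heads, 0 tails

# The maximum likelihood estimate (the sample mean) is 1.0 in both cases,
# even though a fair coin gives 3 straight heads with probability 0.5**3 = 0.125.
print(three_heads.mean(), hundred_heads.mean())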

