
Implement kernel k-means

Introduction  

Today, I implement kernel k-means. k-means is a clustering algorithm. My friend and I came up with the idea of introducing a kernel into k-means, so I investigated papers on kernel k-means and found [this page](http://www.cs.utexas.edu/users/inderjit/public_papers/kdd_spectral_kernelkmeans.pdf). With it, I was able to implement the kernel k-means algorithm. In this post, I introduce my implementations of both normal k-means and kernel k-means.


This post covers only the implementation of kernel k-means. I will write up the theory of kernel k-means separately, and publish it on this blog when it is finished.

Update: I have finished writing it. See Theory of k-means.

Overview  


  • Dataset
  • A brief explanation of k-means
  • k-means
  • Kernel k-means


Dataset  

I used two datasets. The first is designed for normal k-means; the second is designed for kernel k-means.


The first dataset consists of 300 two-dimensional samples divided into three groups.
Its distribution is as follows.

The second dataset consists of 300 two-dimensional samples divided into two groups.
Its distribution is as follows.

The code that generates the datasets is published on this page.
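Since the generation code lives on a separate page, here is a minimal sketch of how such datasets could be produced with NumPy. The cluster centers, ring radii, and noise levels are my own assumptions, not necessarily the values used in the original code.

```python
import numpy as np

rng = np.random.default_rng(0)

# First dataset: three Gaussian clusters, 2-D, 300 samples in total.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [5.0, -5.0]])
X1 = np.vstack([rng.normal(c, 1.0, size=(100, 2)) for c in centers])

# Second dataset: two concentric rings, 2-D, 300 samples in total.
# Plain k-means cannot separate these two groups.
def ring(radius, n):
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = radius + rng.normal(0.0, 0.1, n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

X2 = np.vstack([ring(1.0, 150), ring(3.0, 150)])
```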
A brief explanation of k-means
The k-means algorithm first computes a mean vector for each of the K classes. Second, it computes the distance between each data point and each mean vector. Third, it assigns each data point a new label: the class whose mean vector minimizes the distance to that point. These steps repeat until the assignments stop changing.
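The steps above can be sketched in Python. This is a minimal sketch, not the published code: the initialization with randomly chosen data points (or a user-supplied `init`) and the convergence check are my own choices.

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0, init=None):
    """Plain k-means: alternate between assigning every point to its
    nearest mean vector and recomputing each cluster's mean."""
    rng = np.random.default_rng(seed)
    if init is None:
        # Initialise the K mean vectors with randomly chosen data points.
        means = X[rng.choice(len(X), K, replace=False)].astype(float)
    else:
        means = np.array(init, dtype=float)
    for _ in range(n_iter):
        # Euclidean distance from every point to every mean vector.
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)  # nearest mean vector = new label
        new_means = np.array([X[labels == k].mean(axis=0)
                              if np.any(labels == k) else means[k]
                              for k in range(K)])
        if np.allclose(new_means, means):  # assignments have stabilised
            break
        means = new_means
    return labels, means
```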
k-means  


First, I implemented the normal k-means algorithm and tested it on the first dataset. The result of the test is shown below.


The centroids are the mean vectors.


However, the k-means algorithm has a weak point, which you can see below.


This image shows the result of applying my k-means algorithm to the second dataset.
Normal k-means depends on the Euclidean distance between the mean vectors and the data points in the original data space, so the clustering fails.

Kernel k-means
Normal k-means failed to cluster the second dataset.
However, clustering succeeds when I use the kernel trick.
The result is as follows.



This clustering succeeds.
Kernel methods are a powerful way to perform non-linear clustering.
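Here is a minimal sketch of kernel k-means, following the feature-space distance formula from the referenced paper: a point's distance to a cluster mean is computed from the Gram matrix alone, so the feature map never needs to be evaluated. The RBF kernel, the `gamma` value, and the initialization scheme are illustrative assumptions, not necessarily those of the published code.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq)

def kernel_kmeans(K, n_clusters, n_iter=100, seed=0, init_labels=None):
    """Kernel k-means: k-means in feature space, using only the Gram
    matrix K.  The squared distance from phi(x_i) to the mean of
    cluster c is
        K_ii - 2 * sum_{j in c} K_ij / |c| + sum_{j,l in c} K_jl / |c|^2.
    """
    n = K.shape[0]
    if init_labels is None:
        labels = np.random.default_rng(seed).integers(n_clusters, size=n)
    else:
        labels = np.asarray(init_labels)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            size = mask.sum()
            if size == 0:
                continue  # empty cluster: leave its distances at inf
            dist[:, c] = (diag
                          - 2.0 * K[:, mask].sum(axis=1) / size
                          + K[np.ix_(mask, mask)].sum() / size ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # assignments stabilised
            break
        labels = new_labels
    return labels
```

With a linear kernel `K = X @ X.T` this reduces to ordinary k-means; swapping in `rbf_kernel(X, gamma)` is what lets it separate the two rings.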


CODE
My code for the kernel k-means algorithm is published on this page.

The git_Kmeans_def.py file contains the functions used in normal k-means.
The git_Kmeans_main.py file is the main file; it contains the `if __name__ == '__main__':` block.

The git_kernel_Kemans_def.py file contains the functions used in kernel k-means.
The git_kernel_Kemans_main.py file is the main file; it contains the `if __name__ == '__main__':` block.

Reference  

Dhillon, Guan, Kulis, "Kernel k-means: Spectral Clustering and Normalized Cuts": http://www.cs.utexas.edu/users/inderjit/public_papers/kdd_spectral_kernelkmeans.pdf
Kernel k-means clustering algorithm: https://sites.google.com/site/dataclusteringalgorithms/kernel-k-means-clustering-algorithm
