Introduction to SVM

SVM is a remarkably elegant algorithm with a complete mathematical theory. Although it sees little industrial use nowadays, it is still worth taking the time to write an article organizing it.

1. Support Vectors

1.1 Linear Separability

Let us first look at what linear separability means.

 

[Figure: two classes of points in the plane separated by a straight line]

 

In two-dimensional space, two classes of points are linearly separable if a single straight line separates them completely.

The rigorous mathematical definition is:

Let $D_0$ and $D_1$ be two point sets in $n$-dimensional Euclidean space. $D_0$ and $D_1$ are linearly separable if there exist an $n$-dimensional vector $w$ and a real number $b$ such that every point $x_i \in D_0$ satisfies $w^T x_i + b > 0$, while every point $x_j \in D_1$ satisfies $w^T x_j + b < 0$.
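This definition is easy to check numerically. A minimal numpy sketch, using a hand-made toy data set and a hand-picked $(w, b)$ (both are assumptions for illustration, not from the text):

```python
import numpy as np

def separates(w, b, D0, D1):
    """Return True if the hyperplane w^T x + b = 0 puts all of D0 strictly on
    the positive side and all of D1 strictly on the negative side."""
    s0 = D0 @ w + b          # signed scores for the points of D0
    s1 = D1 @ w + b          # signed scores for the points of D1
    return bool(np.all(s0 > 0) and np.all(s1 < 0))

# Toy 2-D data: D0 lies above the line x1 + x2 = 3, D1 below it.
D0 = np.array([[3.0, 3.0], [4.0, 3.0]])
D1 = np.array([[1.0, 1.0], [0.0, 2.0]])

print(separates(np.array([1.0, 1.0]), -3.0, D0, D1))   # this line separates them
print(separates(np.array([1.0, -1.0]), 0.0, D0, D1))   # this one does not
```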

1.2 The Maximum-Margin Hyperplane

When we move from two dimensions to a higher-dimensional space, the $w^T x + b = 0$ that correctly separates $D_0$ and $D_1$ becomes a hyperplane.

To make this hyperplane as robust as possible, we look for the best one: the hyperplane that separates the two classes with the maximum margin, known as the maximum-margin hyperplane. It satisfies two conditions:

  • the two classes of samples lie on opposite sides of the hyperplane;
  • on each side, the distance from the hyperplane to its nearest sample points is maximized.

1.3 Support Vectors

[Figure: the sample points closest to the separating hyperplane are the support vectors]

The sample points closest to the hyperplane are called support vectors.

1.4 The SVM Optimization Problem

SVM aims to make the distance from the hyperplane to the nearest sample points of each class as large as possible, that is, to find the maximum-margin hyperplane. Any hyperplane can be described by the linear equation:

$$w^T x + b = 0$$

The distance from a point $(x, y)$ in 2D to the line $Ax + By + C = 0$ is:

$$\frac{|Ax + By + C|}{\sqrt{A^2 + B^2}}$$

Extending this to $n$-dimensional space, the distance from a point $x = (x_1, x_2, \dots, x_n)$ to the hyperplane $w^T x + b = 0$ is:

$$\frac{|w^T x + b|}{\|w\|}$$

where $\|w\| = \sqrt{w_1^2 + \dots + w_n^2}$.
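This distance formula can be sanity-checked numerically; the line and point below are made up for illustration:

```python
import numpy as np

def hyperplane_distance(w, b, x):
    """Distance from point x to the hyperplane w^T x + b = 0."""
    return abs(w @ x + b) / np.linalg.norm(w)

# Check against the 2-D line 3x + 4y = 0: distance of (1, 1) is |3 + 4| / 5 = 1.4
w = np.array([3.0, 4.0])
print(hyperplane_distance(w, 0.0, np.array([1.0, 1.0])))  # 1.4
```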

As the figure shows, by the definition of support vectors, the distance from a support vector to the hyperplane is some value $d$, and every other point lies at a distance greater than $d$.

[Figure: support vectors lie at distance d from the hyperplane; all other points lie farther away]

This gives us the following system:

$$\begin{cases} \dfrac{w^T x + b}{\|w\|} \geq d, & y = 1 \\[2mm] \dfrac{w^T x + b}{\|w\|} \leq -d, & y = -1 \end{cases}$$

Dividing both sides by $d$, we obtain:

$$\begin{cases} \dfrac{w^T x + b}{\|w\| d} \geq 1, & y = 1 \\[2mm] \dfrac{w^T x + b}{\|w\| d} \leq -1, & y = -1 \end{cases}$$

$\|w\| d$ is positive, and we can temporarily set it to 1 (rescaling $w$ and $b$ achieves this; the choice of 1 simplifies the derivation and does not affect the optimization of the objective). Hence:

$$\begin{cases} w^T x + b \geq 1, & y = 1 \\ w^T x + b \leq -1, & y = -1 \end{cases}$$

Merging the two inequalities, we can write them compactly as:

$$y(w^T x + b) \geq 1$$

At this point we have the two bounding hyperplanes on either side of the maximum-margin hyperplane:

[Figure: the maximum-margin hyperplane together with its two bounding hyperplanes]

The distance from each support vector to the hyperplane can be written as:

$$d = \frac{|w^T x + b|}{\|w\|}$$

From the inequality above, $y(w^T x + b) \geq 1 > 0$, it follows that $y(w^T x + b) = |w^T x + b|$, so we get:

$$d = \frac{y(w^T x + b)}{\|w\|}$$

We want to maximize this distance:

$$\max \ 2 \cdot \frac{y(w^T x + b)}{\|w\|}$$

The factor of 2 appears because the margin is the distance between the two bounding hyperplanes, i.e., twice the distance from a support vector to the separating hyperplane; it does not change the optimum. We just saw that support vectors satisfy $y(w^T x + b) = 1$, so this reduces to:

$$\max \frac{2}{\|w\|}$$

An equivalent reformulation is:

$$\min \frac{1}{2}\|w\|$$

and, to ease the computation (removing the square root in $\|w\|$), we use:

$$\min \frac{1}{2}\|w\|^2$$

The resulting optimization problem is:

$$\min \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i(w^T x_i + b) \geq 1$$
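This is a convex quadratic program and can be handed to a generic constrained solver. A sketch using `scipy.optimize.minimize` (SLSQP) on a hand-picked three-point toy set; the data and the variable packing are assumptions for illustration, and for this particular data one can verify by hand that the optimum is $w = (0.5, 0.5)$, $b = -2$:

```python
import numpy as np
from scipy.optimize import minimize

# Toy training set: two positive points and one negative point in 2-D.
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])

# Pack the variables as z = (w1, w2, b).
def objective(z):
    w = z[:2]
    return 0.5 * w @ w                      # (1/2) ||w||^2

def margin_constraints(z):
    w, b = z[:2], z[2]
    return y * (X @ w + b) - 1.0            # each entry must be >= 0

res = minimize(objective, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": margin_constraints}])
w, b = res.x[:2], res.x[2]
print(w, b)   # close to w = (0.5, 0.5), b = -2 for this data
```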

2. The Dual Problem

2.1 The Method of Lagrange Multipliers

2.1.1 Equality-Constrained Optimization

The Lagrange multiplier method taught in undergraduate calculus handles the equality-constrained optimization problem:

$$\min f(x_1, x_2, \dots, x_n) \quad \text{s.t.} \quad h_k(x_1, x_2, \dots, x_n) = 0, \quad k = 1, 2, \dots, l$$

We define $L(x, \lambda) = f(x) + \sum\limits_{k=1}^{l} \lambda_k h_k(x)$. The function $L(x, \lambda)$ is called the Lagrange function, and the parameters $\lambda_k$ are the Lagrange multipliers; here they carry no non-negativity requirement.

The necessary conditions yield the candidate extreme points:

$$\begin{cases} \dfrac{\partial L}{\partial x_i} = 0, & i = 1, 2, \dots, n \\[2mm] \dfrac{\partial L}{\partial \lambda_k} = 0, & k = 1, 2, \dots, l \end{cases}$$

Whether a candidate is actually an extreme point must be checked against the specifics of the problem; this system is only the necessary condition for an extremum under equality constraints.

The equality-constrained Lagrange method thus introduces $l$ Lagrange multipliers. We treat $x_i$ and $\lambda_k$ on an equal footing, regarding the $\lambda_k$ as optimization variables too, for a total of $(n + l)$ optimization variables.
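As a concrete instance of these conditions (the specific $f$ and $h$ are chosen here purely for illustration), take $\min\, x^2 + y^2$ subject to $x + y - 1 = 0$. Stationarity of $L(x, y, \lambda) = x^2 + y^2 + \lambda(x + y - 1)$ gives a linear system in the $(n + l) = 3$ variables, which can be solved outright:

```python
import numpy as np

# Stationarity of L(x, y, lam) = x^2 + y^2 + lam*(x + y - 1):
#   dL/dx:   2x      + lam = 0
#   dL/dy:        2y + lam = 0
#   dL/dlam: x  + y        = 1
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x, y_, lam = np.linalg.solve(A, rhs)
print(x, y_, lam)   # the constrained minimum is at (1/2, 1/2), with lam = -1
```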

2.1.2 Inequality-Constrained Optimization

What we face now, however, is an inequality-constrained problem. The main idea is to convert each inequality constraint into an equality constraint by introducing a slack variable, which is then itself treated as an optimization variable.

[Figure: geometric illustration of an inequality-constrained optimization problem]

Taking our problem as the example:

$$\min f(w) = \min \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad g_i(w) = 1 - y_i(w^T x_i + b) \leq 0$$

We introduce a slack variable $a_i^2$ to obtain $h_i(w, a_i) = g_i(w) + a_i^2 = 0$. The slack enters squared so that no new constraint is needed: if we added a bare $a_i$, we would have to require $a_i \geq 0$ for $h_i(w, a_i) = 0$ to capture $g_i(w) \leq 0$, which is exactly what we are trying to avoid.

This converts the inequality constraints into equality constraints and gives the Lagrange function:

$$\begin{aligned} L(w, \lambda, a) &= f(w) + \sum_{i=1}^{n} \lambda_i h_i(w, a_i) \\ &= f(w) + \sum_{i=1}^{n} \lambda_i \left[ g_i(w) + a_i^2 \right], \quad \lambda_i \geq 0 \end{aligned}$$

Applying the necessary conditions for an equality-constrained extremum, we obtain the system:

$$\begin{cases} \dfrac{\partial L}{\partial w_i} = \dfrac{\partial f}{\partial w_i} + \sum\limits_{j=1}^{n} \lambda_j \dfrac{\partial g_j}{\partial w_i} = 0 \\[2mm] \dfrac{\partial L}{\partial a_i} = 2 \lambda_i a_i = 0 \\[2mm] \dfrac{\partial L}{\partial \lambda_i} = g_i(w) + a_i^2 = 0 \\[2mm] \lambda_i \geq 0 \end{cases}$$

(Why $\lambda_i \geq 0$ is required can be explained geometrically; interested readers can look up the proof of the KKT conditions.)

The condition $\lambda_i a_i = 0$ leaves two cases:

Case 1: $\lambda_i = 0,\ a_i \neq 0$

Since $\lambda_i = 0$, the constraint $g_i(w)$ is inactive, and $g_i(w) < 0$.

Case 2: $\lambda_i \neq 0,\ a_i = 0$

Now $g_i(w) = 0$ and $\lambda_i > 0$, which we can read as the constraint $g_i(w)$ being active, with $g_i(w) = 0$.

Combining the two cases: $\lambda_i g_i(w) = 0$ always holds; when the constraint is active, $\lambda_i > 0$ and $g_i(w) = 0$; when it is inactive, $\lambda_i = 0$ and $g_i(w) < 0$.

The system therefore becomes:

$$\begin{cases} \dfrac{\partial L}{\partial w_i} = \dfrac{\partial f}{\partial w_i} + \sum\limits_{j=1}^{n} \lambda_j \dfrac{\partial g_j}{\partial w_i} = 0 \\[2mm] \lambda_i g_i(w) = 0 \\[2mm] g_i(w) \leq 0 \\[2mm] \lambda_i \geq 0 \end{cases}$$

These are the KKT (Karush-Kuhn-Tucker) conditions for inequality-constrained optimization, and the $\lambda_i$ are called KKT multipliers.

What does this system tell us?

Intuitively: for the support vectors, $g_i(w) = 0$, so $\lambda_i > 0$ is permitted; for all other vectors, $g_i(w) < 0$ and $\lambda_i = 0$.
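The KKT conditions can be verified numerically at a known solution. For the toy set below one can check by hand that the hard-margin solution is $w = (0.5, 0.5)$, $b = -2$ with multipliers $\lambda = (0.25, 0, 0.25)$; the data and these values are illustrative assumptions, not from the text:

```python
import numpy as np

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0])
w, b = np.array([0.5, 0.5]), -2.0       # hard-margin solution for this data
lam = np.array([0.25, 0.0, 0.25])       # KKT multipliers, solvable by hand here

g = 1.0 - y * (X @ w + b)               # constraint values g_i(w)

# Primal feasibility, complementary slackness, dual feasibility:
print(np.all(g <= 1e-12))               # g_i(w) <= 0
print(np.allclose(lam * g, 0.0))        # lambda_i * g_i(w) = 0
print(np.all(lam >= 0.0))               # lambda_i >= 0

# Stationarity: dL/dw = 0 is equivalent to w = sum_i lambda_i y_i x_i
print(np.allclose(w, (lam * y) @ X))
```

Note that only the first and third points (the support vectors) carry nonzero multipliers; the interior point has $g_i(w) < 0$ and $\lambda_i = 0$, exactly as the case analysis predicts.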

Our original problem is to solve $\min \frac{1}{2}\|w\|^2$, i.e., $\min L(w, \lambda, a)$:

$$\begin{aligned} L(w, \lambda, a) &= f(w) + \sum_{i=1}^{n} \lambda_i \left[ g_i(w) + a_i^2 \right] \\ &= f(w) + \sum_{i=1}^{n} \lambda_i g_i(w) + \sum_{i=1}^{n} \lambda_i a_i^2 \end{aligned}$$

Since $\sum\limits_{i=1}^{n} \lambda_i a_i^2 \geq 0$, the minimum over $a$ is attained when this term vanishes, so the problem reduces to $\min L(w, \lambda)$ with:

$$L(w, \lambda) = f(w) + \sum_{i=1}^{n} \lambda_i g_i(w)$$

Suppose we have found the best parameters so that the objective attains its minimum value $p$, i.e., $\frac{1}{2}\|w\|^2 = p$. Since $\lambda_i \geq 0$, we have $\sum\limits_{i=1}^{n} \lambda_i g_i(w) \leq 0$ and therefore $L(w, \lambda) \leq p$. To find the $\lambda$ that brings $L(w, \lambda)$ as close to $p$ as possible, the problem becomes $\max\limits_{\lambda} L(w, \lambda)$.

Our optimization problem therefore becomes:

$$\min_{w} \max_{\lambda} L(w, \lambda) \quad \text{s.t.} \quad \lambda_i \geq 0$$

Besides the reasoning above, there is another way to see this. Since $\lambda_i \geq 0$:

$$\max_{\lambda} L(w, \lambda) = \begin{cases} \infty, & g_i(w) > 0 \\[1mm] \dfrac{1}{2}\|w\|^2, & g_i(w) \leq 0 \end{cases}$$

So $\min\left(\infty, \frac{1}{2}\|w\|^2\right) = \frac{1}{2}\|w\|^2$, and the transformed problem is equivalent to the original one.
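This case analysis shows up directly in a numeric experiment with a single scalar constraint (the numbers below are made up for illustration): when $g(w) > 0$, $L$ grows without bound in $\lambda$, while for feasible $w$ the maximum over $\lambda \geq 0$ is attained at $\lambda = 0$ and equals $\frac{1}{2}\|w\|^2$:

```python
import numpy as np

def L(w, lam, g):
    """Lagrangian f(w) + lambda * g(w) with f(w) = (1/2)||w||^2."""
    return 0.5 * w @ w + lam * g

w = np.array([1.0, 1.0])                     # f(w) = 1.0
lams = [0.0, 1.0, 10.0, 1000.0]

print([L(w, l, g=+0.5) for l in lams])       # g > 0: unbounded growth in lambda
print([L(w, l, g=-0.5) for l in lams])       # g < 0: maximized at lambda = 0
```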

For more detail, see this very thorough write-up: https://zhuanlan.zhihu.com/p/77750026