A Simple Machine-Learning Experiment (Gradient Descent)

Implementing the simple classifier from the previous post with a gradient-descent update:

# -*- coding: utf-8 -*-
# easy_linear_learning2.py
import numpy as np

b = 1        # bias weight
a = 0.5      # learning rate
# training samples; the leading 1 in each row is the bias input
x = np.array([[1, 1, 3], [1, 2, 3], [1, 1, 8], [1, 2, 15]])
d = np.array([1, 1, -1, -1])   # desired labels
w = np.array([b, 0, 0])        # initial weights
error = 0                      # convergence threshold
ddcount = 50                   # maximum number of iterations

def sgn(v):
    return 1 if v > 0 else -1

def comy(myw, myx):
    """Classify sample myx with weights myw."""
    return sgn(np.dot(myw.T, myx))

def gradient(myw, myx, mya):
    """Sum d[i]*x[i] over the misclassified samples, scaled by the learning rate."""
    sum_x = np.array([0, 0, 0])
    for i, xn in enumerate(myx):
        if comy(myw, xn) != d[i]:
            sum_x += d[i] * xn
    return mya * sum_x

i = 0
while True:
    grad = gradient(w, x, a)
    w = w + grad
    i += 1
    # note: grad.sum() can be 0 for a nonzero grad; kept as in the original
    if abs(grad.sum()) <= error or i >= ddcount:
        break

test = np.array([1, 9, 19])
print("%d  %d => %d" % (test[1], test[2], comy(w, test)))
test = np.array([1, 3, 22])
print("%d  %d => %d" % (test[1], test[2], comy(w, test)))
In [2]: grad
Out[2]: array([ 0.,  0.,  0.])

In [3]: grad.sum()
Out[3]: 0.0

In [4]: i
Out[4]: 6

As the session shows, grad.sum() reaches 0 after 6 iterations, and the final classifications are correct:

9  19 => 1 
3  22 => -1 
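For reference, the same batch perceptron update can be written as a compact standalone sketch (the function names `train`/`sgn` and the early-exit test are my own, not from the post); on the post's data it should converge to the same result in the same 6 iterations:

```python
import numpy as np

def sgn(v):
    return 1 if v > 0 else -1

def train(x, d, a=0.5, w0=(1.0, 0.0, 0.0), max_iter=50):
    """Batch perceptron: w <- w + a * sum(d[i]*x[i]) over misclassified samples."""
    w = np.array(w0)
    for it in range(1, max_iter + 1):
        grad = a * sum(di * xi for xi, di in zip(x, d)
                       if sgn(w @ xi) != di)
        # an empty sum (or an all-zero gradient) means we are done
        if np.isscalar(grad) or not np.any(grad):
            return w, it
        w = w + grad
    return w, max_iter

x = np.array([[1, 1, 3], [1, 2, 3], [1, 1, 8], [1, 2, 15]], dtype=float)
d = np.array([1, 1, -1, -1])
w, iters = train(x, d)
print("weights:", w, "iterations:", iters)
```

With the learned weights, the decision boundary is the line w0 + w1*x1 + w2*x2 = 0, i.e. x2 = (w0 + w1*x1) / -w2 when w2 is nonzero, which is an easy way to sanity-check the two test points.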

Copyright notice: this is an original article by qq_23851075, released under the CC 4.0 BY-SA license. Reproduction must include a link to the original source and this notice.