Implementing Logistic Regression in PyTorch

The Sigmoid Function

Formula:

 y = \frac{1}{1+e^{-x}}

Derivative:

 y' = \left(\frac{1}{1+e^{-x}}\right)' = -\left(\frac{1}{1+e^{-x}}\right)^{2} \cdot (-e^{-x}) = \frac{e^{-x}}{(1+e^{-x})^{2}} = y(1-y)
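The identity y' = y(1 - y) can be sanity-checked numerically by comparing it against a central-difference approximation of the derivative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Numerical check of the identity y' = y * (1 - y) at an arbitrary point
x = 0.5
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
analytic = sigmoid(x) * (1 - sigmoid(x))               # closed-form derivative
print(abs(numeric - analytic) < 1e-8)  # True
```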

Function plot: (figure of the sigmoid curve omitted)


Logistic Regression

Logistic regression has the same model form as linear regression, y = wx + b, where x can be a multi-dimensional feature vector. The only difference is that logistic regression applies a logistic (sigmoid) function to y, turning the output into a probability.
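As a tiny illustration of that idea (the weights, bias, and input below are made up), the sigmoid maps the linear score wx + b into a probability:

```python
import math

# Hypothetical weights and bias for a single 2-feature sample
w = [0.8, -0.4]
b = 0.1
x = [1.0, 2.0]

score = sum(wi * xi for wi, xi in zip(w, x)) + b  # wx + b
prob = 1 / (1 + math.exp(-score))                 # sigmoid -> probability in (0, 1)
print(round(prob, 3))  # 0.525
```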

Implementing logistic regression with NumPy

 
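A minimal NumPy sketch of logistic regression trained with batch gradient descent. Since the data.csv file used in the PyTorch section is not available here, the example uses a small synthetic, linearly separable dataset:

```python
import numpy as np

# Synthetic 2-feature dataset with linearly separable labels
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(np.float64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(1000):
    p = sigmoid(x @ w + b)           # predicted probabilities
    grad_w = x.T @ (p - y) / len(y)  # gradient of BCE loss w.r.t. w
    grad_b = np.mean(p - y)          # gradient of BCE loss w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
print(acc)  # should be close to 1.0 on this separable data
```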

Implementing logistic regression in PyTorch (binary classification)

# Implementing logistic regression in PyTorch
import math

from matplotlib import pyplot as plt
import numpy
import pandas as pd
import torch
from torch import nn, optim


# Scalar sigmoid (for reference; the model below uses nn.Sigmoid)
def sigmoid(x):
    return 1 / (1 + math.exp(-x))

df = pd.read_csv('./data/logistic_regression/data.csv', header=None)

# Visualize the two classes: red for label 0, blue for label 1
for index, row in df.iterrows():
    plt.scatter(row[0], row[1], c=('r' if int(row[2]) == 0 else 'b'))
plt.show()

x_train = numpy.array(df.iloc[:, 0:-1], dtype=numpy.float32)
y_train = numpy.array(df.iloc[:, -1], dtype=numpy.float32)

x_train = torch.from_numpy(x_train)
# BCELoss expects the target to have the same shape as the model
# output (N, 1), so add a trailing dimension
y_train = torch.from_numpy(y_train).unsqueeze(1)

class LogisticRegression(nn.Module):
    """
    Linear layer followed by a sigmoid
    """
    def __init__(self):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(2, 1)  # two input features
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.linear(x)
        y = self.sigmoid(x)
        return y


lg_model = LogisticRegression()

# Binary cross-entropy loss
criterion = nn.BCELoss()

optimizer = optim.SGD(lg_model.parameters(), lr=0.001)

epochs = 1000
loss_history = []

for epoch in range(epochs):
    y_predict = lg_model(x_train)
    loss = criterion(y_predict, y_train)
    loss_history.append(loss.item())  # store a Python float, not the graph-attached tensor
    # zero the gradients
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("Epoch: {}. Loss: {:.6f}.".format(epoch + 1, loss.item()))

plt.plot(range(epochs), loss_history)
plt.savefig('lr_loss.jpg')
plt.show()
print('train end')

The model outputs a probability between 0 and 1: values below 0.5 are predicted as class 0, and values above 0.5 as class 1.
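This thresholding step can be sketched as follows (the probability values here are illustrative, standing in for the trained model's output):

```python
import torch

# Illustrative model outputs, shape (N, 1) like the model's predictions
probs = torch.tensor([[0.2], [0.7], [0.6], [0.49]])

# Threshold at 0.5 to get hard class labels
labels = (probs > 0.5).float()
print(labels.squeeze(1).tolist())  # [0.0, 1.0, 1.0, 0.0]
```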

Note: Cmd Markdown formula reference guide: https://www.zybuluo.com/codeep/note/163962


Copyright notice: this article is an original work by LYouthzzz, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.