Logistic Regression
- Computing Hypothesis
- Computing Cost Function
- Evaluation
- Higher Implementation
Logistic Regression
Hypothesis
$${H(X) = \frac{1}{1 + e^{-W^TX}}}$$
Cost
$${cost(W) = -\frac{1}{m}\sum \left[\, y\log(H(x)) + (1-y)\log(1-H(x)) \,\right]}$$
- if ${y \simeq H(x)}$, cost is near 0
- if ${y \ne H(x)}$, cost is high
- ${P(X = 1) = 1 - P(X = 0)}$
- X is an ${m\times d}$ matrix, W is a ${d\times 1}$ vector
- The sigmoid function is used to squash the output toward values close to 0 and 1
- What is the sigmoid function? A function that maps ${-\infty}$ to values near 0 and ${+\infty}$ to values near 1 (see the short sketch below)
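As a quick illustration of this saturation behavior, here is a minimal sketch (my own example, not part of the lecture) that evaluates torch.sigmoid at a few points:
import torch

# sigmoid squashes any real number into the interval (0, 1)
x = torch.FloatTensor([-10.0, -1.0, 0.0, 1.0, 10.0])
print(torch.sigmoid(x))
# large negative inputs give values near 0, large positive inputs give values near 1
# (roughly 4.5e-05, 0.27, 0.50, 0.73, 0.99995)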
Gradient Descent
${W := W - \alpha\frac{\partial}{\partial W}cost(W)}$
- ${\alpha}$ : learning rate
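The update rule itself is easy to mimic with autograd. The following is a minimal sketch with a made-up one-parameter cost, not the lecture's code; the real training loop appears in the practice section below:
import torch

# toy setup: a single weight and an arbitrary differentiable cost(W)
W = torch.zeros(1, requires_grad=True)
x = torch.FloatTensor([2.0])
y = torch.FloatTensor([1.0])
alpha = 0.1  # learning rate

cost = ((torch.sigmoid(W * x) - y) ** 2).sum()
cost.backward()                  # fills W.grad with d(cost)/dW
with torch.no_grad():
    W -= alpha * W.grad          # W := W - alpha * d(cost)/dW
    W.grad.zero_()
print(W)                         # W has moved in the direction that lowers the cost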
Practice
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# For reproducibility
torch.manual_seed(1)
x_data = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]] ## 6 x 2
y_data = [[0],[0],[0], [1],[1],[1]] ## 6 x 1
x_train = torch.FloatTensor(x_data)
y_train = torch.FloatTensor(y_data)
print(x_train.shape)
print(y_train.shape)
>>>
torch.Size([6, 2])
torch.Size([6, 1])
Computing the Hypothesis
$${H(X) = \frac{1}{1 + e^{-W^TX}}}$$
In PyTorch, torch.exp() computes the exponential function.
print('e^1 equals : ', torch.exp(torch.FloatTensor([1])))
>>>
e^1 equals : tensor([2.7183])
With it, we can compute the hypothesis function directly:
W = torch.zeros((2, 1), requires_grad = True)
b = torch.zeros(1, requires_grad = True)
hypothesis = 1 / (1 + torch.exp(-(x_train.matmul(W) + b)))
print(hypothesis)
print(hypothesis.shape)
>>>
tensor([[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000]], grad_fn=<MulBackward0>)
torch.Size([6, 1])
PyTorch also provides the sigmoid function as torch.sigmoid().
print('1/(1 + e^{-1}) equals: ', torch.sigmoid(torch.FloatTensor([1])))
>>>
1/(1 + e^{-1}) equals: tensor([0.7311])
hypothesis = torch.sigmoid(x_train.matmul(W) + b)
print(hypothesis)
print(hypothesis.shape)
>>>
tensor([[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000],
[0.5000]], grad_fn=<SigmoidBackward>)
torch.Size([6, 1])
Computing the Cost function
$${cost(W) = -\frac{1}{m}\sum \left[\, y\log(H(x)) + (1-y)\log(1-H(x)) \,\right]}$$
We need to measure how far the hypothesis is from y_train.
Looking at a single element first, the cost is:
-(y_train[0] * torch.log(hypothesis[0]) +
(1 - y_train[0]) * torch.log(1 - hypothesis[0]))
>>>
tensor([0.6931], grad_fn=<NegBackward>)
Written out for all samples:
losses = -(y_train * torch.log(hypothesis) +
(1 - y_train) * torch.log(1 - hypothesis))
print(losses)
>>>
tensor([[0.6931],
[0.6931],
[0.6931],
[0.6931],
[0.6931],
[0.6931]], grad_fn=<NegBackward>)
Take the mean over the samples with .mean():
cost = losses.mean()
print(cost)
>>>
tensor(0.6931, grad_fn=<MeanBackward0>)
The whole computation above can be done in a single line:
F.binary_cross_entropy(hypothesis, y_train)
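As a quick sanity check (my own addition, using the hypothesis, y_train, and cost tensors computed above), the built-in loss should match the manual computation:
bce = F.binary_cross_entropy(hypothesis, y_train)
print(bce)                        # about 0.6931, same as the manual mean loss
print(torch.allclose(bce, cost))  # expected to print True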
Whole Training Procedure
# Initialize the model parameters
W = torch.zeros((2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# Set up the optimizer
optimizer = optim.SGD([W, b], lr=1)

nb_epochs = 1000
for epoch in range(nb_epochs + 1):
    # Compute the cost
    hypothesis = torch.sigmoid(x_train.matmul(W) + b)
    cost = F.binary_cross_entropy(hypothesis, y_train)
    # Improve H(x) using the cost
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    # Print every 100 epochs
    if epoch % 100 == 0:
        print('epoch {:4d}/{} Cost : {:.6f}'.format(
            epoch, nb_epochs, cost.item()
        ))
>>>
epoch 0/1000 Cost : 0.693147
epoch 100/1000 Cost : 0.134722
epoch 200/1000 Cost : 0.080643
epoch 300/1000 Cost : 0.057900
epoch 400/1000 Cost : 0.045300
epoch 500/1000 Cost : 0.037261
epoch 600/1000 Cost : 0.031672
epoch 700/1000 Cost : 0.027556
epoch 800/1000 Cost : 0.024394
epoch 900/1000 Cost : 0.021888
epoch 1000/1000 Cost : 0.019852
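After training, the learned parameters can be inspected directly (the exact values depend on the run, so no output is shown here):
print(W)  # learned weights, shape (2, 1)
print(b)  # learned bias, shape (1,)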
Evaluation
After training, we evaluate the model to check its accuracy.
hypothesis = torch.sigmoid(x_train.matmul(W) + b)
print(hypothesis)
>>>
tensor([[2.7648e-04],
[3.1608e-02],
[3.8977e-02],
[9.5622e-01],
[9.9823e-01],
[9.9969e-01]], grad_fn=<SigmoidBackward>)
prediction = hypothesis >= torch.FloatTensor([0.5])  # 1 if >= 0.5, otherwise 0
print(prediction.float())
correct_prediction = prediction.float() == y_train
print(correct_prediction)
>>>
tensor([[True],
[True],
[True],
[True],
[True],
[True]])
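From correct_prediction we can also report an overall accuracy (a small addition on my part; the class-based loop below computes the same quantity):
accuracy = correct_prediction.float().mean().item()
print('Accuracy: {:.2f}%'.format(accuracy * 100))  # 100.00% on this training set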
Higher Implementation with Class
A realistic implementation would be structured as a class, as follows:
class BinaryClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 1)  # W: (2, 1) and b: (1,) for the 2 input features
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))

model = BinaryClassifier()

# Set up the optimizer
optimizer = optim.SGD(model.parameters(), lr=1)

nb_epochs = 100
for epoch in range(nb_epochs + 1):
    # Compute H(x)
    hypothesis = model(x_train)
    # Compute the cost
    cost = F.binary_cross_entropy(hypothesis, y_train)
    # Improve H(x) using the cost
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()
    # Print every 10 epochs
    if epoch % 10 == 0:
        prediction = hypothesis >= torch.FloatTensor([0.5])
        correct_prediction = prediction.float() == y_train
        accuracy = correct_prediction.sum().item() / len(correct_prediction)
        print('Epoch {:4d}/{} Cost : {:.6f} Accuracy {:2.2f}%'.format(
            epoch, nb_epochs, cost.item(), accuracy * 100
        ))
>>>
Epoch 0/100 Cost : 0.539713 Accuracy 100.00%
Epoch 10/100 Cost : 0.614853 Accuracy 100.00%
Epoch 20/100 Cost : 0.441875 Accuracy 100.00%
Epoch 30/100 Cost : 0.373145 Accuracy 100.00%
Epoch 40/100 Cost : 0.316358 Accuracy 100.00%
Epoch 50/100 Cost : 0.266094 Accuracy 100.00%
Epoch 60/100 Cost : 0.220498 Accuracy 100.00%
Epoch 70/100 Cost : 0.182095 Accuracy 100.00%
Epoch 80/100 Cost : 0.157299 Accuracy 100.00%
Epoch 90/100 Cost : 0.144091 Accuracy 100.00%
Epoch 100/100 Cost : 0.134272 Accuracy 100.00%
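Once trained, the model can be used to predict on new inputs. The sketch below uses a made-up sample with the same two features as x_train; it is my own illustration, not part of the lecture:
# hypothetical new sample
new_x = torch.FloatTensor([[4, 2]])
with torch.no_grad():
    prob = model(new_x)           # predicted probability of class 1
    pred = (prob >= 0.5).float()  # threshold at 0.5
print(prob, pred)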
Source: www.boostcourse.org/ai214/lecture/42289