
Implementing and Comparing Convolutional Neural Networks (CNNs) in Keras and PyTorch

Posted at 2021-07-15

 In this article I implement a convolutional neural network in each of Python's two major deep learning libraries, Keras and PyTorch, and compare them by classifying images from the CIFAR-10 dataset.

 (Note: this article is a rough write-up of what I studied, so it should be fine as a refresher if you already know the material, but if you want an accurate treatment you are probably better off reading another article. As for the code, it should mostly run as-is if you paste it into Jupyter or similar, provided the required libraries are installed.)

###Convolutional neural networks (CNNs)

 A CNN is built mainly from three kinds of layers. Convolutional and pooling layers extract features from the input image, and a final fully connected layer produces the output. In addition, there are layers such as batch normalization and dropout that help suppress overfitting.
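As a quick sanity check on how convolution and pooling layers transform spatial dimensions, here is a small sketch (my own illustration, not from the original article) of the standard output-size formula; the numbers match the layer shapes that appear in the model summaries below:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # standard output-size formula shared by convolution and pooling layers
    return (size + 2 * padding - kernel) // stride + 1

# 3x3 convolution with 'same' padding (padding=1) keeps a 32x32 input at 32
assert conv_out(32, 3, padding=1) == 32
# 3x3 convolution without padding: 32 -> 30
assert conv_out(32, 3) == 30
# 2x2 max pooling with stride 2 halves it: 30 -> 15
assert conv_out(30, 2, stride=2) == 15
```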


###CIFAR-10
 CIFAR-10 is a dataset widely used in deep learning / machine learning research and beginner tutorials, mainly for image recognition.


The full CIFAR-10 dataset consists of:
・50,000 training samples (images and labels)
・10,000 test samples (images and labels)
・60,000 images in total

 Each image is a 24-bit RGB full-color image, 32 pixels wide by 32 pixels high; each of the three channels (red/green/blue) takes one of 256 values from 0 to 255. One sample is therefore a multidimensional array of shape (3, 32, 32) or (32, 32, 3) (3,072 elements in total), with the 3-element RGB dimension first or last.
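To make that layout concrete, a dummy array of the same shape shows the 3,072-element count and the two channel orderings (a NumPy-only sketch; it does not load the actual dataset):

```python
import numpy as np

# one CIFAR-10 image in channels-last layout: height 32, width 32, 3 RGB channels
img = np.zeros((32, 32, 3), dtype=np.uint8)
assert img.size == 3072  # 32 * 32 * 3 elements

# channels-first layout (3, 32, 32), as used by PyTorch, holds the same values
img_chw = img.transpose(2, 0, 1)
assert img_chw.shape == (3, 32, 32)
```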

###The convolutional neural network we will build


###Implementation in Keras
Importing libraries

import numpy as np

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten, BatchNormalization
from keras.datasets import cifar10
from keras.utils import to_categorical

Loading CIFAR-10 (plus scaling the inputs to [0, 1] and one-hot encoding the labels)

(x_train, y_train),(x_test, y_test) = cifar10.load_data()

x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
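to_categorical turns the integer class labels into one-hot vectors. The same effect can be sketched with NumPy alone (np.eye is my stand-in for illustration, not what Keras uses internally):

```python
import numpy as np

labels = np.array([3, 0, 9])   # integer class IDs, like y_train before encoding
one_hot = np.eye(10)[labels]   # one row of the 10x10 identity matrix per label
assert one_hot.shape == (3, 10)
assert one_hot[0, 3] == 1.0 and one_hot[0].sum() == 1.0  # single 1 per row
```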

Building the model

model = Sequential()

model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), padding='same', 
                 activation='relu', input_shape=(32, 32, 3)))
model.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding='same', 
                 activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.summary()

Output

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 32, 32, 32)        896       
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 30, 30, 32)        9248      
_________________________________________________________________
batch_normalization (BatchNo (None, 30, 30, 32)        128       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 15, 15, 32)        0         
_________________________________________________________________
dropout (Dropout)            (None, 15, 15, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 15, 15, 64)        18496     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 13, 13, 64)        36928     
_________________________________________________________________
batch_normalization_1 (Batch (None, 13, 13, 64)        256       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 6, 6, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 2304)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               1180160   
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                5130      
=================================================================
Total params: 1,251,242
Trainable params: 1,251,050
Non-trainable params: 192
_________________________________________________________________

Training and displaying the results

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=x_train, y=y_train, batch_size=64, epochs=10, validation_data=(x_test, y_test))
model.evaluate(x_test, y_test)

Output

Epoch 1/10
782/782 [==============================] - 155s 198ms/step - loss: 1.6193 - accuracy: 0.4317 - val_loss: 1.2944 - val_accuracy: 0.5414
Epoch 2/10
782/782 [==============================] - 154s 197ms/step - loss: 1.2168 - accuracy: 0.5705 - val_loss: 1.0971 - val_accuracy: 0.6140
Epoch 3/10
782/782 [==============================] - 154s 196ms/step - loss: 1.0295 - accuracy: 0.6410 - val_loss: 0.9070 - val_accuracy: 0.6951
Epoch 4/10
782/782 [==============================] - 145s 185ms/step - loss: 0.9069 - accuracy: 0.6856 - val_loss: 0.8581 - val_accuracy: 0.6995
Epoch 5/10
782/782 [==============================] - 149s 191ms/step - loss: 0.8238 - accuracy: 0.7144 - val_loss: 0.8277 - val_accuracy: 0.7170
Epoch 6/10
782/782 [==============================] - 151s 193ms/step - loss: 0.7618 - accuracy: 0.7331 - val_loss: 0.7847 - val_accuracy: 0.7326
Epoch 7/10
782/782 [==============================] - 149s 191ms/step - loss: 0.7099 - accuracy: 0.7513 - val_loss: 0.7822 - val_accuracy: 0.7399
Epoch 8/10
782/782 [==============================] - 152s 194ms/step - loss: 0.6700 - accuracy: 0.7665 - val_loss: 0.6998 - val_accuracy: 0.7591
Epoch 9/10
782/782 [==============================] - 157s 201ms/step - loss: 0.6261 - accuracy: 0.7820 - val_loss: 0.7198 - val_accuracy: 0.7592
Epoch 10/10
782/782 [==============================] - 151s 193ms/step - loss: 0.5909 - accuracy: 0.7940 - val_loss: 0.7151 - val_accuracy: 0.7684
313/313 [==============================] - 7s 21ms/step - loss: 0.7151 - accuracy: 0.7684

###Implementation in PyTorch
Importing libraries

from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
import matplotlib.pyplot as plt
from torchsummary import summary
import torch.nn as nn
from torch import optim

Loading the dataset and creating DataLoaders

normalize = transforms.Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
to_tensor = transforms.ToTensor()

transform_train = transforms.Compose([to_tensor, normalize])
transform_test = transforms.Compose([to_tensor, normalize])

cifar10_train = CIFAR10("./data", train=True, download=True, transform=transform_train)
cifar10_test = CIFAR10("./data", train=False, download=True, transform=transform_test)

batch_size = 64
train_loader = DataLoader(cifar10_train, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(cifar10_test, batch_size=len(cifar10_test), shuffle=False)
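The DataLoader simply slices the 50,000-sample training set into shuffled mini-batches of 64 (keeping the smaller final batch by default), so one epoch is 782 steps; this can be checked with plain arithmetic, no PyTorch required:

```python
import math

n_train, batch_size = 50000, 64
# DataLoader's default (drop_last=False) keeps the final partial batch
n_batches = math.ceil(n_train / batch_size)
assert n_batches == 782  # matches the 782/782 steps per epoch in the Keras log too

# the last batch holds whatever remains
last_batch = n_train - (n_batches - 1) * batch_size
assert last_batch == 16
```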

Building the model

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv3_32 = nn.Conv2d(3, 32, 3, padding=(1,1), padding_mode='replicate')
        self.conv32_32 = nn.Conv2d(32, 32, 3)
        self.conv32_64 = nn.Conv2d(32, 64, 3, padding=(1,1), padding_mode='replicate')
        self.conv64_64 = nn.Conv2d(64, 64, 3)
        self.pool = nn.MaxPool2d((2, 2))
        self.dropout025 = nn.Dropout(0.25)
        self.dropout050 = nn.Dropout(0.50)
        self.bn32 = nn.BatchNorm2d(32)
        self.bn64 = nn.BatchNorm2d(64)
        self.fc1 = nn.Linear(64*6*6, 512)
        self.fc2 = nn.Linear(512, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.conv3_32(x))
        x = self.relu(self.conv32_32(x))
        x = self.bn32(x)
        x = self.pool(x)
        x = self.dropout025(x)
        x = self.relu(self.conv32_64(x))
        x = self.relu(self.conv64_64(x))
        x = self.bn64(x)
        x = self.pool(x)
        x = self.dropout025(x)
        x = x.view(-1, 64*6*6)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout050(x)
        x = self.fc2(x)
        return x

net = Net()
summary(net,(3,32,32))

Output

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 32, 32]             896
              ReLU-2           [-1, 32, 32, 32]               0
            Conv2d-3           [-1, 32, 30, 30]           9,248
              ReLU-4           [-1, 32, 30, 30]               0
       BatchNorm2d-5           [-1, 32, 30, 30]              64
         MaxPool2d-6           [-1, 32, 15, 15]               0
           Dropout-7           [-1, 32, 15, 15]               0
            Conv2d-8           [-1, 64, 15, 15]          18,496
              ReLU-9           [-1, 64, 15, 15]               0
           Conv2d-10           [-1, 64, 13, 13]          36,928
             ReLU-11           [-1, 64, 13, 13]               0
      BatchNorm2d-12           [-1, 64, 13, 13]             128
        MaxPool2d-13             [-1, 64, 6, 6]               0
          Dropout-14             [-1, 64, 6, 6]               0
           Linear-15                  [-1, 512]       1,180,160
             ReLU-16                  [-1, 512]               0
          Dropout-17                  [-1, 512]               0
           Linear-18                   [-1, 10]           5,130
================================================================
Total params: 1,251,050
Trainable params: 1,251,050
Non-trainable params: 0

Training and displaying the results

loss_func = nn.CrossEntropyLoss()

optimizer = optim.Adam(net.parameters())

x_test, t_test = next(iter(test_loader))

for i in range(10):
    net.train()
    loss_train = 0
    correct = 0
    total = 0
    for j, (x, t) in enumerate(train_loader):
        y = net(x)
        loss = loss_func(y, t)
        loss_train += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    loss_train /= j + 1

    net.eval()
    y_test = net(x_test)
    loss_test = loss_func(y_test, t_test).item()
    correct += (y_test.argmax(1) == t_test).sum().item()
    total += len(x_test)
    acc_test = correct/total
    
    print("Epoch:", i, "Loss_Train:", loss_train, "Loss_Test:", loss_test, "acc_test:", acc_test)

Output

Epoch: 0 Loss_Train: 1.4037733696153403 Loss_Test: 1.14847993850708 acc_test: 0.6034
Epoch: 1 Loss_Train: 1.0341440180835821 Loss_Test: 0.9750216603279114 acc_test: 0.6674
Epoch: 2 Loss_Train: 0.8731644713055448 Loss_Test: 0.9409785270690918 acc_test: 0.6836
Epoch: 3 Loss_Train: 0.7814018539226878 Loss_Test: 0.9034271240234375 acc_test: 0.708
Epoch: 4 Loss_Train: 0.7162514436046791 Loss_Test: 0.7007519006729126 acc_test: 0.7612
Epoch: 5 Loss_Train: 0.6576254427661676 Loss_Test: 0.6346898078918457 acc_test: 0.781
Epoch: 6 Loss_Train: 0.6264099999690604 Loss_Test: 0.6972277164459229 acc_test: 0.7689
Epoch: 7 Loss_Train: 0.5791886984692205 Loss_Test: 0.6604976654052734 acc_test: 0.781
Epoch: 8 Loss_Train: 0.5507333390319439 Loss_Test: 0.6652767062187195 acc_test: 0.7801
Epoch: 9 Loss_Train: 0.5095748831434628 Loss_Test: 0.666159987449646 acc_test: 0.7781

###Summary
 I implemented a convolutional neural network in both Keras and PyTorch. In both cases the accuracy appears to converge to just under 80% after 10 epochs.

 Having implemented the network in both, my impression is that Keras is somewhat friendlier for beginners: the code is a bit shorter, and simply calling the training method also displays progress. Also, I had originally set up Early Stopping in Keras to counter overfitting, but PyTorch has no equivalent built-in setting and you have to implement it yourself, so to keep the comparison fair I removed it on the Keras side as well. In general, Keras does a lot for you out of the box, whereas in PyTorch you need to implement those pieces yourself. On the other hand, I suspect that is exactly what gives PyTorch more room for customization.
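For reference, the manual early stopping mentioned above can be sketched framework-agnostically: track validation loss per epoch and stop once it has not improved for a patience window (the function name, patience value, and toy loss curve below are my own illustration, not from the article):

```python
def should_stop(val_losses, patience=3):
    """Stop when validation loss hasn't improved for `patience` epochs."""
    if len(val_losses) <= patience:
        return False  # not enough history yet
    best_so_far = min(val_losses[:-patience])
    # no value in the last `patience` epochs beat the earlier best -> stop
    return min(val_losses[-patience:]) >= best_so_far

# toy loss curve: improves for 3 epochs, then plateaus for 3
history = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73]
assert should_stop(history, patience=3)        # plateaued -> stop
assert not should_stop(history[:4], patience=3)  # still improving -> continue
```

In the PyTorch training loop above, you would append loss_test to such a history each epoch and break out of the loop when the function returns True.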
