Understanding deep-learning hyperparameters by working through our predecessors' failures and accumulated wisdom

MNIST

First, import the data and inspect it.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()
nb_classes = 10

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
print(X_train.dtype, X_test.dtype, y_train.dtype, y_test.dtype)

Results

(60000, 28, 28) (10000, 28, 28) (60000,) (10000,)
uint8 uint8 uint8 uint8

Reshape the data for model input and evaluation.

from keras.utils import np_utils

# Flatten the 28x28 images into 784-dim vectors and scale pixels to [0, 1]
X_train = X_train.reshape(-1, 28*28).astype('float64')
X_test = X_test.reshape(-1, 28*28).astype('float64')
X_train /= 255
X_test /= 255
# One-hot encode the labels
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
print(X_train.dtype, X_test.dtype, y_train.dtype, y_test.dtype)

Results

(60000, 784) (10000, 784) (60000, 10) (10000, 10)
float64 float64 float64 float64

Batch gradient descent

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.callbacks import Callback

import time

class Evaluate(Callback):
    # Print test loss/accuracy every 10 epochs
    epoch = 0
    def on_epoch_end(self, epoch, logs={}):
        self.epoch += 1
        if self.epoch % 10 == 0:
            score = self.model.evaluate(X_test, y_test, batch_size=10000)
            print('test - loss: {loss:.4f} - acc: {accuracy:.4f}'.format(loss=score[0], accuracy=score[1]))

model = Sequential()

# A single softmax layer on the flattened pixels (multinomial logistic regression)
model.add(Dense(nb_classes, input_dim=28*28))
model.add(Activation('softmax'))

model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])
evaluate = Evaluate()
begin = time.clock()
# batch_size equal to the full training set: one weight update per epoch
model.fit(X_train, y_train, nb_epoch=1, batch_size=60000, verbose=2, callbacks=[evaluate])
print('Time elapsed: %.0f' % (time.clock() - begin))

Results

Train accuracy: 0.0983
Test accuracy:  0.0928
Time: 1.7s/epoch

With epoch=1 the model naturally learns almost nothing, so increase it to 100.
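
Only the nb_epoch argument of the fit call changes:

model.fit(X_train, y_train, nb_epoch=100, batch_size=60000, verbose=2, callbacks=[evaluate])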

Results

Train accuracy: 0.1058
Test accuracy:  0.1078
Time: 1.4s/epoch

Part of the problem is that the learning rate hasn't been tuned, but waiting for convergence at this pace isn't practical, so switch to stochastic gradient descent.
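
As an aside, the learning rate can be set explicitly by passing an SGD optimizer object instead of the 'sgd' string. A sketch (lr=0.1 is an arbitrary illustrative value, not one tuned in this article):

from keras.optimizers import SGD

# lr=0.1 is illustrative only; the article keeps the Keras default
model.compile(optimizer=SGD(lr=0.1), loss='mean_squared_error', metrics=['accuracy'])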

Stochastic gradient descent

model.fit(X_train, y_train, nb_epoch=100, batch_size=256, verbose=2, callbacks=[evaluate])

Results

Train accuracy: 0.8547
Test accuracy:  0.8669
Time: 1.7s/epoch

It still hasn't converged at this point, and the parameters could be tuned further, but instead let's switch the optimizer to Adam.

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
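
The string 'adam' uses Keras's default hyperparameters; they can also be written out explicitly. A sketch with the Keras 1.x defaults spelled out:

from keras.optimizers import Adam

# Keras 1.x default Adam hyperparameters, written out explicitly
model.compile(optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8), loss='mean_squared_error', metrics=['accuracy'])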

Results

Train accuracy: 0.9504
Test accuracy:  0.9329
Time: 1.8s/epoch

Convergence is noticeably faster now.
Try epoch=1000.
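
Again only nb_epoch changes:

model.fit(X_train, y_train, nb_epoch=1000, batch_size=256, verbose=2, callbacks=[evaluate])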

Results

Train accuracy: 0.9661
Test accuracy:  0.9277
Time: 1.8s/epoch

This seems to be the limit for this model, so start making the model more complex.

Neural network

Set epoch back to 100 and add a single hidden layer.
For now, match the number of nodes to the output layer.
Incidentally, from this point on the experiments run on an AWS g2 instance.

model.add(Dense(10, input_dim=28*28))
model.add(Activation('sigmoid'))

model.add(Dense(nb_classes))
model.add(Activation('softmax'))

Results

Train accuracy: 0.9602
Test accuracy:  0.9408
Time: 4s/epoch

Increase the number of nodes.
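
Only the hidden-layer width changes; for 256 nodes the sketch is (512 is analogous):

model.add(Dense(256, input_dim=28*28))
model.add(Activation('sigmoid'))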

Results

256 nodes

Train accuracy: 0.9995
Test accuracy:  0.9793
Time elapsed:   364.6s

512 nodes

Train accuracy: 0.9995
Test accuracy:  0.9831
Time: 4s/epoch

By now the differences between configurations are getting hard to see on MNIST, so switch the dataset to CIFAR10 (nb_epoch = 1000, batch_size = 128).
EarlyStopping is also introduced at the same time.

CIFAR10

from keras.datasets import cifar10
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

# Flatten the 3x32x32 images and preprocess the same way as for MNIST
X_train = X_train.reshape(-1, 3*32*32).astype('float64')
X_test = X_test.reshape(-1, 3*32*32).astype('float64')
X_train /= 255
X_test /= 255
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)

# Fresh model for CIFAR10
model = Sequential()

model.add(Dense(256, input_dim=3*32*32))
model.add(Activation('sigmoid'))

model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
# Stop training once validation loss hasn't improved for 50 epochs
stop = EarlyStopping(monitor='val_loss', patience=50)
model.fit(X_train, y_train, nb_epoch=1000, batch_size=128, verbose=2, callbacks=[evaluate, stop], validation_split=0.2)

Results

Train accuracy: 0.9183
Test accuracy:  0.4567
Time: 8s/epoch

The model appears to be overfitting, so add dropout.

from keras.layers.core import Dropout

model.add(Dense(256, input_dim=3*32*32))
model.add(Activation('sigmoid'))
model.add(Dropout(0.5))

Results

Train accuracy: 0.6638
Test accuracy:  0.5038
Time: 8s/epoch

Deep neural network

Add more layers.

import numpy as np

n = 20
# Choose the per-layer dropout rate so that the overall keep
# probability across all n layers is (1 - drop)^n = 0.5
drop = 1.0 - np.power(0.5, 1.0 / n)
for i in range(n):
    if i == 0:
        model.add(Dense(256, input_dim=3*32*32))
    else:
        model.add(Dense(256))
    model.add(Activation('sigmoid'))
    model.add(Dropout(drop))

Results

Train accuracy: 0.0969
Test accuracy:  0.1000
Time: 15s/epoch

With sigmoid the gradients vanish, so swap the activation function for ELU.

from keras.layers.advanced_activations import ELU

model.add(ELU())
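
In context, ELU() simply takes the place of the Activation('sigmoid') line inside the layer loop above; a sketch:

for i in range(n):
    if i == 0:
        model.add(Dense(256, input_dim=3*32*32))
    else:
        model.add(Dense(256))
    model.add(ELU())  # replaces Activation('sigmoid')
    model.add(Dropout(drop))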

Results

Train accuracy: 0.2513
Test accuracy:  0.2357
Time: 15s/epoch

At this point, let's try convolutions.

Convolutional neural network

from keras.layers.core import Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D

# This time keep the images in their original (N, 3, 32, 32) shape
# (Theano dim ordering); no flattening
X_train = X_train.astype('float64')
X_test = X_test.astype('float64')

# Per-layer dropout rate giving an overall keep probability of 0.5 over 2 layers
drop = 1.0 - np.power(0.5, 1.0 / 2)

model.add(Convolution2D(nb_filter=8, nb_row=3, nb_col=3, border_mode='same', input_shape=(3, 32, 32)))
model.add(ELU())
model.add(Dropout(drop))
model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))

model.add(Flatten())

model.add(Dense(256))
model.add(ELU())
model.add(Dropout(drop))

model.add(Dense(nb_classes))
model.add(Activation('softmax'))

Results

Train accuracy: 0.8167
Test accuracy:  0.6384
Time: 20s/epoch

Next, try ZCA whitening, Batch Normalization (or is it closer to Local Response Normalization?), and cross-entropy loss, each separately.

ZCA Whitening

Whitening is slow, so if you plan to run multiple experiments it's better to prepare the whitened data once in advance.

whitening.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from keras.datasets import cifar10
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
import numpy as np

nb_classes = 10

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print('Data loaded')

X_train = X_train.astype('float64')
X_test = X_test.astype('float64')
X_train /= 255
X_test /= 255
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)

datagen_train = ImageDataGenerator(zca_whitening=True)
datagen_train.fit(X_train)
train_data = None
# shuffle=False keeps the whitened images aligned with the saved labels
for X, _ in datagen_train.flow(X_train, y_train, batch_size=1000, shuffle=False):
    if train_data is None:
        train_data = X
    else:
        train_data = np.append(train_data, X, axis=0)
    if len(train_data) >= len(X_train):
        break

print('Train data whitened')
# Save the whitened arrays, not the originals
np.save('X_train', train_data)
np.save('y_train', y_train)

datagen_test = ImageDataGenerator(zca_whitening=True)
datagen_test.fit(X_test)
test_data = None
for X, _ in datagen_test.flow(X_test, y_test, batch_size=1000, shuffle=False):
    if test_data is None:
        test_data = X
    else:
        test_data = np.append(test_data, X, axis=0)
    if len(test_data) >= len(X_test):
        break

print('Test data whitened')
np.save('X_test', test_data)
np.save('y_test', y_test)

# In the training script, load the whitened data instead of calling load_data()
X_train = np.load('X_train.npy')
y_train = np.load('y_train.npy')
X_test = np.load('X_test.npy')
y_test = np.load('y_test.npy')

Results

Train accuracy: 0.8088
Test accuracy:  0.6470
Time: 20s/epoch

Batch Normalization

from keras.layers.normalization import BatchNormalization

model.add(Convolution2D(nb_filter=8, nb_row=2, nb_col=2, border_mode='same', input_shape=(3, 32, 32)))
model.add(ELU())
model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))
model.add(BatchNormalization())

model.add(Flatten())

model.add(Dense(256))
model.add(ELU())
model.add(Dropout(0.5))
model.add(BatchNormalization())

model.add(Dense(nb_classes))
model.add(Activation('softmax'))

Results

Train accuracy: 0.8212
Test accuracy:  0.6336
Time: 21s/epoch

Cross Entropy Loss

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Results

Train accuracy: 0.7755
Test accuracy:  0.6462
Time: 20s/epoch

The individual effects are hard to judge, but the impression is that Batch Normalization at least speeds up convergence.
For now, apply all three and start varying the convolution parameters.
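
As a reference point, this is roughly what the combined baseline looks like when reconstructed from the snippets above (a sketch; the exact layer order the author used is an assumption):

# Assumes the whitened data from whitening.py has been loaded
model = Sequential()

model.add(Convolution2D(nb_filter=8, nb_row=3, nb_col=3, border_mode='same', input_shape=(3, 32, 32)))
model.add(ELU())
model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))
model.add(BatchNormalization())

model.add(Flatten())

model.add(Dense(256))
model.add(ELU())
model.add(Dropout(0.5))
model.add(BatchNormalization())

model.add(Dense(nb_classes))
model.add(Activation('softmax'))

# Cross-entropy loss instead of MSE
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])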

Parameter tuning

Change the convolution filter size to 5x5.

model.add(Convolution2D(nb_filter=8, nb_row=5, nb_col=5, border_mode='same', input_shape=(3, 32, 32)))

Results

Train accuracy: 0.8593
Test accuracy:  0.6398
Time: 21s/epoch

Change the number of convolution filters to 16.

model.add(Convolution2D(nb_filter=16, nb_row=3, nb_col=3, border_mode='same', input_shape=(3, 32, 32)))

Results

Train accuracy: 0.8862
Test accuracy:  0.6327
Time: 35s/epoch

Add one more convolution before the pooling layer.

for i in range(2):
    if i == 0:
        model.add(Convolution2D(nb_filter=8, nb_row=3, nb_col=3, border_mode='same', input_shape=(3, 32, 32)))
    else:
        model.add(Convolution2D(nb_filter=8, nb_row=3, nb_col=3, border_mode='same'))
    model.add(ELU())
    model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))

Results

Train accuracy: 0.8972
Test accuracy:  0.6687
Time: 42s/epoch

Add one more convolution + pooling set.

for i in range(2):
    if i == 0:
        model.add(Convolution2D(nb_filter=8 * 2 ** i, nb_row=3, nb_col=3, border_mode='same', input_shape=(3, 32, 32)))
    else:
        # The second set doubles the number of filters
        model.add(Convolution2D(nb_filter=8 * 2 ** i, nb_row=3, nb_col=3, border_mode='same'))
    model.add(ELU())
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2), border_mode='same'))

Results

Train accuracy: 0.8754
Test accuracy:  0.6838
Time: 31s/epoch

Change the number of nodes in the fully connected layer to 512.

model.add(Dense(512))

Results

Train accuracy: 0.9260
Test accuracy:  0.6335
Time: 23s/epoch

Add one more fully connected layer.

n = 2
drop = 1.0 - np.power(0.5, 1.0 / n)
for i in range(n):
    model.add(Dense(256))
    model.add(ELU())
    model.add(Dropout(drop))
    model.add(BatchNormalization())

Results

Train accuracy: 0.9126
Test accuracy:  0.6325
Time: 24s/epoch

Summary

The effects depend on the problem being solved and on the base model, so the accuracy numbers themselves don't generalize.
Looking at the computation times, though, may help you decide which knobs to try first.

Change                    Effect (Test accuracy)   Computation time
Baseline                  (0.638)                  (20s)
Larger filter size        × (0.640)                ◎ (21s)
More filters              × (0.633)                × (35s)
Extra convolution         ○ (0.669)                × (42s)
Extra conv+pooling set    ◎ (0.684)                ○ (31s)
More FC-layer nodes       × (0.634)                ◎ (23s)
Extra FC layer            × (0.633)                ◎ (24s)