
[TF] How to Load/Save Models and Parameters in Keras

Posted at 2016-04-23

Saving and loading the Model

A Model you have built can be saved as text in either the JSON or YAML file format.
You can also read the saved file back and rebuild the Model from it.

To save, use model.to_json()/model.to_yaml(), where model is the model you built.
model.to_json()/model.to_yaml() return a string, so you have to write it to a file yourself.

To load, use model_from_json()/model_from_yaml().
These also take a string as their argument, so you have to read the file in yourself.

Saving and loading (json)

json_string = model.to_json()

from keras.models import model_from_json
model = model_from_json(json_string)
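
Since to_json()/to_yaml() only return a string and model_from_json()/model_from_yaml() only accept one, the file I/O is up to you. A minimal sketch of the round trip (the file names model.json and model.yaml are just placeholders for this example):

from keras.models import model_from_json, model_from_yaml

# save: serialize the architecture and write the text out yourself
with open('model.json', 'w') as f:   # placeholder file name
    f.write(model.to_json())
with open('model.yaml', 'w') as f:   # placeholder file name
    f.write(model.to_yaml())

# load: read the text back and rebuild the (untrained) model
with open('model.json') as f:
    model = model_from_json(f.read())
# or, for the YAML version:
# with open('model.yaml') as f:
#     model = model_from_yaml(f.read())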

Saving and loading Parameters

To save and load trained Parameters, use save_weights/load_weights.

model.save_weights('param.hdf5')

model.load_weights('param.hdf5')

To save the parameters during training, use a Callback.
The Callback to use is ModelCheckpoint.
The callback is called at the end of every epoch.

Arguments

| argument | description |
| --- | --- |
| filepath | name of the file to save to |
| monitor | the quantity to monitor, e.g. monitor='val_loss' |
| verbose | whether to print a message to stdout when saving |
| save_best_only | whether to save only when the monitored quantity has improved; if False, the model is saved every epoch |
| mode | when the monitored quantity should trigger a save: for accuracy you want to save when it increases, so specify max; for loss it is the opposite, so specify min; with auto the direction is inferred from the quantity's name |

If filepath is always the same name the file gets overwritten, so to vary the name ModelCheckpoint automatically substitutes the values of the variables you put into it.
The variables you can use are epoch, loss, acc, val_loss, and val_acc.

For example, if you specify filepath as shown below, the current values are filled in automatically.
filepath = 'weights.{epoch:02d}-{loss:.2f}-{acc:.2f}-{val_loss:.2f}-{val_acc:.2f}.hdf5'
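
Putting this together, a minimal sketch of wiring ModelCheckpoint into training (Keras 1.x API, the same calls as in the full example below; model, X_train, Y_train, X_test and Y_test are assumed to exist already):

import keras.callbacks

filepath = 'weights.{epoch:02d}-{loss:.2f}-{acc:.2f}-{val_loss:.2f}-{val_acc:.2f}.hdf5'
cp_cb = keras.callbacks.ModelCheckpoint(filepath=filepath, monitor='val_loss',
                                        verbose=1, save_best_only=True, mode='auto')

# the callback fires at the end of every epoch
model.fit(X_train, Y_train, batch_size=128, nb_epoch=20,
          callbacks=[cp_cb], validation_data=(X_test, Y_test))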

Code examples

Build and train a Model

The example below builds a Model, saves the Weights during training with a Callback, and finally saves the Model and the Weights.

import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from keras.optimizers import Adam
import keras.callbacks
import keras.backend.tensorflow_backend as KTF
import tensorflow as tf
import os.path

batch_size = 128
nb_classes = 10
nb_epoch = 20

img_rows = 28
img_cols = 28

f_log = './log'
f_model = './model'
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test  = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test  = X_test.astype('float32')
X_train /= 255
X_test /= 255

Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

old_session = KTF.get_session()

with tf.Graph().as_default():
    session = tf.Session('')
    KTF.set_session(session)

    model = Sequential()

    model.add(Convolution2D(32, 3, 3, border_mode = 'valid', input_shape=(1, img_rows, img_cols)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Convolution2D(64, 3, 3, border_mode = 'valid'))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(Dense(10))
    model.add(Activation('softmax'))

    model.summary()

    model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001, beta_1=0.5), metrics=['accuracy'])
    
    tb_cb = keras.callbacks.TensorBoard(log_dir=f_log, histogram_freq=1)
    cp_cb = keras.callbacks.ModelCheckpoint(filepath = os.path.join(f_model,'cnn_model{epoch:02d}-loss{loss:.2f}-acc{acc:.2f}-vloss{val_loss:.2f}-vacc{val_acc:.2f}.hdf5'), monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
    cbks = [tb_cb, cp_cb]

    history = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, callbacks=cbks, validation_data=(X_test, Y_test))
    score = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])

    print('save the architecture of a model')
    json_string = model.to_json()
    open(os.path.join(f_model,'cnn_model.json'), 'w').write(json_string)
    yaml_string = model.to_yaml()
    open(os.path.join(f_model,'cnn_model.yaml'), 'w').write(yaml_string)
    print('save weights')
    model.save_weights(os.path.join(f_model,'cnn_model_weights.hdf5'))
KTF.set_session(old_session)

The output looks like this.

____________________________________________________________________________________________________
Layer (type)                       Output Shape        Param #     Connected to                     
====================================================================================================
convolution2d_1 (Convolution2D)    (None, 32, 26, 26)  320         convolution2d_input_1[0][0]      
____________________________________________________________________________________________________
activation_1 (Activation)          (None, 32, 26, 26)  0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)      (None, 32, 13, 13)  0           activation_1[0][0]               
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)    (None, 64, 11, 11)  18496       maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
activation_2 (Activation)          (None, 64, 11, 11)  0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)      (None, 64, 5, 5)    0           activation_2[0][0]               
____________________________________________________________________________________________________
flatten_1 (Flatten)                (None, 1600)        0           maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
dense_1 (Dense)                    (None, 128)         204928      flatten_1[0][0]                  
____________________________________________________________________________________________________
activation_3 (Activation)          (None, 128)         0           dense_1[0][0]                    
____________________________________________________________________________________________________
dense_2 (Dense)                    (None, 10)          1290        activation_3[0][0]               
____________________________________________________________________________________________________
activation_4 (Activation)          (None, 10)          0           dense_2[0][0]                    
====================================================================================================
Total params: 225034
____________________________________________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.1610 - acc: 0.9524Epoch 00000: val_loss improved from inf to 0.05515, saving model to ./model/cnn_model00-loss0.16-acc0.95-vloss0.06-vacc0.98.hdf5
60000/60000 [==============================] - 24s - loss: 0.1609 - acc: 0.9525 - val_loss: 0.0552 - val_acc: 0.9830
Epoch 2/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0478 - acc: 0.9853Epoch 00001: val_loss improved from 0.05515 to 0.04094, saving model to ./model/cnn_model01-loss0.05-acc0.99-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 24s - loss: 0.0477 - acc: 0.9853 - val_loss: 0.0409 - val_acc: 0.9871
Epoch 3/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0338 - acc: 0.9891Epoch 00002: val_loss improved from 0.04094 to 0.03424, saving model to ./model/cnn_model02-loss0.03-acc0.99-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 23s - loss: 0.0338 - acc: 0.9891 - val_loss: 0.0342 - val_acc: 0.9890
Epoch 4/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0248 - acc: 0.9918Epoch 00003: val_loss improved from 0.03424 to 0.02830, saving model to ./model/cnn_model03-loss0.02-acc0.99-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 23s - loss: 0.0248 - acc: 0.9918 - val_loss: 0.0283 - val_acc: 0.9898
Epoch 5/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0192 - acc: 0.9940Epoch 00004: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0192 - acc: 0.9940 - val_loss: 0.0286 - val_acc: 0.9908
Epoch 6/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0145 - acc: 0.9954Epoch 00005: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0145 - acc: 0.9954 - val_loss: 0.0300 - val_acc: 0.9914
Epoch 7/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0119 - acc: 0.9961Epoch 00006: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0119 - acc: 0.9961 - val_loss: 0.0396 - val_acc: 0.9881
Epoch 8/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0097 - acc: 0.9968Epoch 00007: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0097 - acc: 0.9969 - val_loss: 0.0302 - val_acc: 0.9901
Epoch 9/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0075 - acc: 0.9977Epoch 00008: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0075 - acc: 0.9976 - val_loss: 0.0400 - val_acc: 0.9877
Epoch 10/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0081 - acc: 0.9973Epoch 00009: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0081 - acc: 0.9972 - val_loss: 0.0352 - val_acc: 0.9905
Epoch 11/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0058 - acc: 0.9979Epoch 00010: val_loss did not improve
60000/60000 [==============================] - 24s - loss: 0.0058 - acc: 0.9979 - val_loss: 0.0359 - val_acc: 0.9912
Epoch 12/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0056 - acc: 0.9981Epoch 00011: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0056 - acc: 0.9981 - val_loss: 0.0346 - val_acc: 0.9915
Epoch 13/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0055 - acc: 0.9983Epoch 00012: val_loss improved from 0.02830 to 0.02716, saving model to ./model/cnn_model12-loss0.01-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 23s - loss: 0.0055 - acc: 0.9983 - val_loss: 0.0272 - val_acc: 0.9926
Epoch 14/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0029 - acc: 0.9992Epoch 00013: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0029 - acc: 0.9992 - val_loss: 0.0365 - val_acc: 0.9917
Epoch 15/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0052 - acc: 0.9983Epoch 00014: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0052 - acc: 0.9983 - val_loss: 0.0357 - val_acc: 0.9916
Epoch 16/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.9986Epoch 00015: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0047 - acc: 0.9987 - val_loss: 0.0311 - val_acc: 0.9922
Epoch 17/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0038 - acc: 0.9988Epoch 00016: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0038 - acc: 0.9988 - val_loss: 0.0424 - val_acc: 0.9905
Epoch 18/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0040 - acc: 0.9986Epoch 00017: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0040 - acc: 0.9986 - val_loss: 0.0382 - val_acc: 0.9922
Epoch 19/20
59904/60000 [============================>.] - ETA: 0s - loss: 5.6678e-04 - acc: 0.9999Epoch 00018: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 5.6587e-04 - acc: 1.0000 - val_loss: 0.0379 - val_acc: 0.9926
Epoch 20/20
59904/60000 [============================>.] - ETA: 0s - loss: 3.6203e-04 - acc: 1.0000Epoch 00019: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 3.6146e-04 - acc: 1.0000 - val_loss: 0.0379 - val_acc: 0.9930
('Test score:', 0.037918671134550642)
('Test accuracy:', 0.99299999999999999)
save the architecture of a model
save weights

Loading the Model and Weights from files

import numpy as np
from keras.datasets import mnist
from keras.models import model_from_json
from keras.utils import np_utils
from keras.optimizers import Adam
import keras.callbacks
import keras.backend.tensorflow_backend as KTF
import tensorflow as tf
import os.path

batch_size = 128
nb_classes = 10
nb_epoch = 3

img_rows = 28
img_cols = 28

f_log = './log'
f_model = './model'
model_filename = 'cnn_model.json'
weights_filename = 'cnn_model_weights.hdf5'

(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test  = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test  = X_test.astype('float32')
X_train /= 255
X_test /= 255

Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

old_session = KTF.get_session()

with tf.Graph().as_default():
    session = tf.Session('')
    KTF.set_session(session)

    json_string = open(os.path.join(f_model, model_filename)).read()
    model = model_from_json(json_string)

    model.summary()

    model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001, beta_1=0.5), metrics=['accuracy'])

    model.load_weights(os.path.join(f_model,weights_filename))

    cbks = []

    history = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, callbacks=cbks, validation_data=(X_test, Y_test))
    score = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])
    
KTF.set_session(old_session)

Below is the output. Since the trained Parameters are loaded before training starts, accuracy is high from the very first epoch.

____________________________________________________________________________________________________
Layer (type)                       Output Shape        Param #     Connected to                     
====================================================================================================
convolution2d_1 (Convolution2D)    (None, 32, 26, 26)  320         convolution2d_input_1[0][0]      
____________________________________________________________________________________________________
activation_1 (Activation)          (None, 32, 26, 26)  0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)      (None, 32, 13, 13)  0           activation_1[0][0]               
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)    (None, 64, 11, 11)  18496       maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
activation_2 (Activation)          (None, 64, 11, 11)  0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)      (None, 64, 5, 5)    0           activation_2[0][0]               
____________________________________________________________________________________________________
flatten_1 (Flatten)                (None, 1600)        0           maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
dense_1 (Dense)                    (None, 128)         204928      flatten_1[0][0]                  
____________________________________________________________________________________________________
activation_3 (Activation)          (None, 128)         0           dense_1[0][0]                    
____________________________________________________________________________________________________
dense_2 (Dense)                    (None, 10)          1290        activation_3[0][0]               
____________________________________________________________________________________________________
activation_4 (Activation)          (None, 10)          0           dense_2[0][0]                    
====================================================================================================
Total params: 225034
____________________________________________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 17s - loss: 0.0024 - acc: 0.9993 - val_loss: 0.0442 - val_acc: 0.9916
Epoch 2/3
60000/60000 [==============================] - 17s - loss: 0.0049 - acc: 0.9986 - val_loss: 0.0401 - val_acc: 0.9914
Epoch 3/3
60000/60000 [==============================] - 17s - loss: 0.0031 - acc: 0.9991 - val_loss: 0.0424 - val_acc: 0.9912
('Test score:', 0.042397765618482915)
('Test accuracy:', 0.99119999999999997)

Example: a custom Callback that makes the save frequency configurable

Let's write our own Callback so that how often the Weights are saved can be configured.
To create a Callback, subclass keras.callbacks.Callback.
The timings at which a Callback is called are fixed, and so are the corresponding method names, so you only have to override the parts you want to change.
If necessary, you can also call the parent class's implementation.

| method | description |
| --- | --- |
| on_epoch_begin | called at the start of each epoch |
| on_epoch_end | called at the end of each epoch |
| on_batch_begin | called at the start of each batch |
| on_batch_end | called at the end of each batch |
| on_train_begin | called at the start of training |
| on_train_end | called at the end of training |
| _set_params | called when training starts, with the Model information passed as the argument; rarely used. The TensorBoard Callback calls histogram_summary at this point. |
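
As a minimal illustration of these hooks (not from the original code; the class name LossLogger and the print format are made up for this sketch), a Callback that prints the training loss at the end of every epoch could look like this:

import keras.callbacks

class LossLogger(keras.callbacks.Callback):
    # hypothetical example: report the training loss after every epoch
    def on_epoch_end(self, epoch, logs={}):
        print('epoch {}: loss={}'.format(epoch, logs.get('loss')))

You would pass an instance of it to model.fit() via the callbacks argument, just like ModelCheckpoint.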

The callback looks like the code below. It simply receives the call frequency in the constructor and, in on_epoch_end, calls the parent's on_epoch_end according to that frequency.

class ModelCheckpointEx(keras.callbacks.ModelCheckpoint):
    def __init__(self, filepath, verbose=0, save_freq=1):
        # save_freq: save the weights only once every save_freq epochs
        super(ModelCheckpointEx, self).__init__(filepath, verbose=verbose)
        self.save_freq = save_freq

    def on_epoch_end(self, epoch, logs={}):
        # delegate to ModelCheckpoint only on every save_freq-th epoch
        if epoch % self.save_freq == 0:
            super(ModelCheckpointEx, self).on_epoch_end(epoch, logs=logs)

Usage is the same as before, so the full code is omitted; a minimal sketch of just the changed part follows. The output is shown after that, and you can see that the checkpoint log appears only once every two epochs.
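
A minimal usage sketch (assuming the same model, data, f_model, batch_size and nb_epoch as in the full example above):

cp_cb = ModelCheckpointEx(filepath=os.path.join(f_model, 'cnn_model{epoch:02d}-loss{loss:.2f}-acc{acc:.2f}-vloss{val_loss:.2f}-vacc{val_acc:.2f}.hdf5'),
                          verbose=1, save_freq=2)
cbks = [cp_cb]
history = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, callbacks=cbks, validation_data=(X_test, Y_test))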

Train on 60000 samples, validate on 10000 samples
Epoch 1/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.1646 - acc: 0.9498Epoch 00000: saving model to ./model/cnn_model00-loss0.16-acc0.95-vloss0.06-vacc0.98.hdf5
60000/60000 [==============================] - 16s - loss: 0.1645 - acc: 0.9499 - val_loss: 0.0613 - val_acc: 0.9820
Epoch 2/20
60000/60000 [==============================] - 16s - loss: 0.0524 - acc: 0.9833 - val_loss: 0.0396 - val_acc: 0.9867
Epoch 3/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0361 - acc: 0.9889Epoch 00002: saving model to ./model/cnn_model02-loss0.04-acc0.99-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0361 - acc: 0.9889 - val_loss: 0.0353 - val_acc: 0.9877
Epoch 4/20
60000/60000 [==============================] - 16s - loss: 0.0269 - acc: 0.9913 - val_loss: 0.0306 - val_acc: 0.9900
Epoch 5/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0203 - acc: 0.9937Epoch 00004: saving model to ./model/cnn_model04-loss0.02-acc0.99-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0203 - acc: 0.9937 - val_loss: 0.0422 - val_acc: 0.9871
Epoch 6/20
60000/60000 [==============================] - 16s - loss: 0.0174 - acc: 0.9942 - val_loss: 0.0315 - val_acc: 0.9893
Epoch 7/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0119 - acc: 0.9962Epoch 00006: saving model to ./model/cnn_model06-loss0.01-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0119 - acc: 0.9962 - val_loss: 0.0329 - val_acc: 0.9901
Epoch 8/20
60000/60000 [==============================] - 16s - loss: 0.0100 - acc: 0.9967 - val_loss: 0.0337 - val_acc: 0.9881
Epoch 9/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0082 - acc: 0.9975Epoch 00008: saving model to ./model/cnn_model08-loss0.01-acc1.00-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0081 - acc: 0.9975 - val_loss: 0.0360 - val_acc: 0.9890
Epoch 10/20
60000/60000 [==============================] - 16s - loss: 0.0082 - acc: 0.9974 - val_loss: 0.0305 - val_acc: 0.9911
Epoch 11/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0071 - acc: 0.9977Epoch 00010: saving model to ./model/cnn_model10-loss0.01-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0071 - acc: 0.9977 - val_loss: 0.0348 - val_acc: 0.9903
Epoch 12/20
60000/60000 [==============================] - 16s - loss: 0.0073 - acc: 0.9974 - val_loss: 0.0313 - val_acc: 0.9913
Epoch 13/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0040 - acc: 0.9988Epoch 00012: saving model to ./model/cnn_model12-loss0.00-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0040 - acc: 0.9988 - val_loss: 0.0348 - val_acc: 0.9915
Epoch 14/20
60000/60000 [==============================] - 16s - loss: 0.0027 - acc: 0.9993 - val_loss: 0.0457 - val_acc: 0.9891
Epoch 15/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0067 - acc: 0.9977Epoch 00014: saving model to ./model/cnn_model14-loss0.01-acc1.00-vloss0.05-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0066 - acc: 0.9977 - val_loss: 0.0483 - val_acc: 0.9891
Epoch 16/20
60000/60000 [==============================] - 16s - loss: 0.0046 - acc: 0.9984 - val_loss: 0.0358 - val_acc: 0.9902
Epoch 17/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.9994Epoch 00016: saving model to ./model/cnn_model16-loss0.00-acc1.00-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0022 - acc: 0.9995 - val_loss: 0.0394 - val_acc: 0.9910
Epoch 18/20
60000/60000 [==============================] - 16s - loss: 0.0046 - acc: 0.9986 - val_loss: 0.0398 - val_acc: 0.9895
Epoch 19/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0029 - acc: 0.9992Epoch 00018: saving model to ./model/cnn_model18-loss0.00-acc1.00-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0029 - acc: 0.9992 - val_loss: 0.0419 - val_acc: 0.9915
Epoch 20/20
60000/60000 [==============================] - 16s - loss: 0.0027 - acc: 0.9992 - val_loss: 0.0368 - val_acc: 0.9911
('Test score:', 0.036752203683738938)
('Test accuracy:', 0.99109999999999998)