
Getting started with Keras and trying to visualize with TensorBoard (not working yet) [WIP]

Posted 2017-09-03

I still haven't managed to get the TensorBoard visualization working, and this article is a work in progress, but it was getting long, so here it is.

Updating TensorFlow

It has been a while since Keras was bundled into TensorFlow, so I tried to update. The result: I couldn't.

The full story got long, so please see my other article about it.

Getting started with Keras

This time I'll use the Keras that ships with TensorFlow. Of course, installing Keras the normal way would be fine too.
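
As a quick sanity check before diving in, here is a minimal sketch; it only assumes TensorFlow 1.3 or later, the first release that bundles Keras under tf.contrib:

# quick environment check (assumes TensorFlow 1.3+)
import tensorflow as tf
print(tf.__version__)  # expect '1.3.0' or later; earlier versions lack the bundled Keras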

Why bother with Keras?

Using raw TensorFlow can sometimes be painful. When I previously defined a small fully connected network for MNIST, I wrote code like this:

# vanilla TensorFlow (v1.0)
import tensorflow as tf

batch_size = 128
hidden_layer_size = 1024
input_layer_size = 28 * 28
output_layer_size = 10

# small helpers for parameter initialization
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

# valid_dataset and test_dataset are assumed to be preloaded NumPy arrays

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, input_layer_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, output_layer_size))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables (input -> hidden)
    weight1 = weight_variable((input_layer_size, hidden_layer_size))
    bias1 = bias_variable([hidden_layer_size])

    # Hidden layer
    hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, weight1) + bias1)

    # Variables (hidden -> output)
    weight2 = weight_variable((hidden_layer_size, output_layer_size))
    bias2 = bias_variable([output_layer_size])

    logits = tf.matmul(hidden_layer, weight2) + bias2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
    # optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weight1) + bias1), weight2) + bias2)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weight1) + bias1), weight2) + bias2)

The first thing you notice is the sheer amount of code. That problem was largely solved by the introduction of tf.layers. With tf.layers, it looks like this:

# tf.layers
import tensorflow as tf

def nn_model(images, drop_rate, variable_scope, reuse=False):
    with tf.variable_scope(variable_scope, reuse=reuse):
        net = tf.layers.dense(images, 512, activation=tf.nn.relu)  # dense layer (1st hidden layer)
        net = tf.layers.dropout(net, rate=drop_rate)
        net = tf.layers.dense(net, 512, activation=tf.nn.relu)  # dense layer (2nd hidden layer)
        net = tf.layers.dropout(net, rate=drop_rate)
        # NOTE: ReLU on the readout layer is questionable; logits fed to
        # softmax_cross_entropy would normally use activation=None
        net = tf.layers.dense(net, 10, activation=tf.nn.relu)  # readout layer

    return net

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

drop_rate = 1 - keep_prob
mlp_pred = nn_model(x, drop_rate, variable_scope='mlp')
loss = tf.losses.softmax_cross_entropy(y, mlp_pred)
train_step = tf.train.AdamOptimizer().minimize(loss)

correct_prediction = tf.equal(tf.argmax(mlp_pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

You can see that the amount of code has dropped considerably. Even so, there are cases where this still feels insufficient.
For example, if you make the following typo in the code above, the cause of the resulting error is hard to figure out.

# the same model, but with a typo in the readout layer size
def nn_model(images, drop_rate, variable_scope, reuse=False):
    with tf.variable_scope(variable_scope, reuse=reuse):
        net = tf.layers.dense(images, 512, activation=tf.nn.relu)  # dense layer (1st hidden layer)
        net = tf.layers.dropout(net, rate=drop_rate)
        net = tf.layers.dense(net, 512, activation=tf.nn.relu)  # dense layer (2nd hidden layer)
        net = tf.layers.dropout(net, rate=drop_rate)
        # Wrong: '512' should be '10'
        net = tf.layers.dense(net, 512, activation=tf.nn.relu)  # readout layer

    return net

The error output in that case looks like the following. It is very hard to tell what needs fixing.

ValueErrorTraceback (most recent call last)
<ipython-input-36-96111e522d26> in <module>()
     15 drop_rate = 1 - keep_prob
     16 mlp_pred = nn_model(x, drop_rate, variable_scope='mlp2')
---> 17 loss = tf.losses.softmax_cross_entropy(y, mlp_pred)
     18 train_step = tf.train.AdamOptimizer().minimize(loss)
     19 

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/losses/losses_impl.py in softmax_cross_entropy(onehot_labels, logits, weights, label_smoothing, scope, loss_collection, reduction)
    631     logits = ops.convert_to_tensor(logits)
    632     onehot_labels = math_ops.cast(onehot_labels, logits.dtype)
--> 633     logits.get_shape().assert_is_compatible_with(onehot_labels.get_shape())
    634 
    635     if label_smoothing > 0:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/tensor_shape.py in assert_is_compatible_with(self, other)
    735     """
    736     if not self.is_compatible_with(other):
--> 737       raise ValueError("Shapes %s and %s are incompatible" % (self, other))
    738 
    739   def most_specific_compatible_shape(self, other):

ValueError: Shapes (?, 512) and (?, 10) are incompatible

The cause of the error is that the shapes don't line up, but shapes ought to be inferable automatically in the first place. Having to set them by hand without mistakes is painful work (I've been burned by this twice already). Honestly, I wish something could be done about it.
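
One cheap way to catch this kind of mistake earlier is to print the inferred shape right after building the model, before the loss ties it to the labels. A small sketch using the names from the snippet above:

# inspect the inferred output shape immediately after building the model
mlp_pred = nn_model(x, drop_rate, variable_scope='mlp')
print(mlp_pred.get_shape())  # expect (?, 10); with the typo above this prints (?, 512)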

Also, suppose you write training code like the following.

# this code doesn't work well
num_steps = 3000

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    for step in range(num_steps):
        # make batch datasets with size 128
        offset = (step * batch_size) % (X_train.shape[0] - batch_size)

        # Generate a minibatch.
        X_batch = X_train[offset:(offset + batch_size), :]
        y_batch = y_train[offset:(offset + batch_size), :]
        
        if step % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x : X_batch, y: y_batch, keep_prob: 1.0})
            print('step %d, training accuracy %g' % (step, train_accuracy))
            
        train_step.run(feed_dict={x : X_batch, y: y_batch, keep_prob: 0.5})
    
    print('test accuracy %g' % accuracy.eval(feed_dict={x : X_test, y : y_test, keep_prob: 1.0}))

The run produced the following:

step 0, training accuracy 0.0625
step 100, training accuracy 0.359375
step 200, training accuracy 0.460938
step 300, training accuracy 0.523438
	:
	:
step 2800, training accuracy 0.578125
step 2900, training accuracy 0.617188
test accuracy 0.585

The results honestly aren't good, but there are simply too many suspects for where to improve... And even code like this takes real time to write; I wrote it for this comparison and it took quite a while.

Rewriting it in Keras

Here is roughly the same thing written with Keras:

# Keras

# hyperparameters (the imports used here are shown in the next section)
batch_size = 128
nb_epoch = 20

# Model
model = Sequential([
    Dense(512, input_shape=(X_train.shape[1], )),  # dense layer (1st hidden layer)
    Activation('relu'),
    Dropout(0.2),
    Dense(512),  # dense layer (2nd hidden layer)
    Activation('relu'),
    Dropout(0.2),
    Dense(10),  # readout layer
    Activation('softmax'),
])

# Train
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch, verbose=1,
          validation_data=(X_test, y_test))

# Inference
y_estimated = model.predict_classes(X_test)

Just this much code gets you these results:

Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 7s - loss: 0.2512 - acc: 0.9250 - val_loss: 0.1101 - val_acc: 0.9659
Epoch 2/20
60000/60000 [==============================] - 7s - loss: 0.1010 - acc: 0.9695 - val_loss: 0.0786 - val_acc: 0.9753
	:
	:
Epoch 20/20
60000/60000 [==============================] - 13s - loss: 0.0162 - acc: 0.9946 - val_loss: 0.0809 - val_acc: 0.9818

This is impressive.
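
As a follow-up to the inference step above, the predicted classes can be checked against the labels directly. A small sketch, assuming y_test is one-hot encoded as produced by to_categorical:

# sanity-check the predictions against the one-hot test labels
import numpy as np
test_accuracy = np.mean(y_estimated == np.argmax(y_test, axis=1))
print('test accuracy %g' % test_accuracy)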

Why I'm only now picking up Keras

Because Keras now ships with TensorFlow. The module path is quite deep, and its location still seems to be in flux, but you can import it like this:

# loading the Keras bundled with TensorFlow v1.3
from tensorflow.contrib.keras.api.keras.datasets import mnist
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense, Dropout, Activation
from tensorflow.contrib.keras.api.keras.optimizers import Adam
from tensorflow.contrib.keras.api.keras.utils import to_categorical
from tensorflow.contrib.keras.api.keras import backend as K
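
For completeness, the mnist and to_categorical imports above are presumably how the X_train / y_train used earlier were prepared. A minimal sketch:

# a minimal sketch of preparing the MNIST data with the imports above:
# flatten the 28x28 images to 784-dim vectors, scale to [0, 1], one-hot the labels
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784).astype('float32') / 255
X_test = X_test.reshape(10000, 784).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)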

The documentation is also available on the official TensorFlow site, and it says to see the official Keras docs for details.

[Screenshot of the TensorFlow documentation page referring readers to the official Keras docs]

So, in short, that's the arrangement.

Keras on TensorBoard

Since Keras ships with TensorFlow, it should also support TensorBoard. Using the TensorBoard callback ought to do the trick!

Let's try it.

# Doesn't work
# build the model with layer names so they show up in TensorBoard

# assumed setup (not shown in the original snippet): the TensorBoard callback
# is imported from the bundled Keras, and log_dir points somewhere writable
from tensorflow.contrib.keras.api.keras.callbacks import TensorBoard
log_dir = './logs'

old_session = K.get_session()

with tf.Graph().as_default():
    with tf.Session() as session:
        K.set_session(session)
        K.set_learning_phase(1)
        model = Sequential()
        model.add(Dense(512, input_shape=(X_train.shape[1], ), name="Dense1")) # Dense Layer (1st hidden layer) 
        model.add(Activation('relu', name="Relu1"))
        model.add(Dropout(0.2, name="Dropout1"))
        model.add(Dense(512, name="Dense2")) # Dense Layer (2nd hidden layer)
        model.add(Activation('relu', name="Relu2"))
        model.add(Dropout(0.2, name="Dropout2"))
        model.add(Dense(10, name="Dense3")) # Readout Layer
        model.add(Activation('softmax', name="Softmax")) 

        model.summary()
        model.compile(loss="categorical_crossentropy", optimizer=Adam(), metrics=['accuracy'])

        tensorboard_callback = TensorBoard(log_dir=log_dir, histogram_freq=1)
        callbacks = [tensorboard_callback]

        history = model.fit(X_train,
                            y_train, 
                            batch_size=batch_size, 
                            epochs=nb_epoch, verbose=1,
                            validation_data=(X_test, y_test),
                            callbacks=callbacks
                           )

        score = model.evaluate(X_test, y_test, verbose=0)
        print('Test score', score[0])
        print('Test accuracy', score[1])

K.set_session(old_session)

The run ends like this:

TypeErrorTraceback (most recent call last)
<ipython-input-15-eb62e6893019> in <module>()
     10         model.add(Dense(512, input_shape=(X_train.shape[1], ), name="Dense1")) # Dense Layer (1st hidden layer)
     11         model.add(Activation('relu', name="Relu1"))
---> 12         model.add(Dropout(0.2, name="Dropout1"))
     13         model.add(Dense(512, name="Dense2")) # Dense Layer (2nd hidden layer)
     14         model.add(Activation('relu', name="Relu2"))

/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/keras/python/keras/models.py in add(self, layer)
    507           output_masks=[None])
    508     else:
--> 509       output_tensor = layer(self.outputs[0])
    510       if isinstance(output_tensor, list):
    511         raise TypeError('All layers in a Sequential model '

/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/keras/python/keras/engine/topology.py in __call__(self, inputs, **kwargs)
    394 
    395     # Actually call the layer (optionally building it).
--> 396     output = super(Layer, self).__call__(inputs, **kwargs)
    397 
    398     # Handle mask computation.

/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/base.py in __call__(self, inputs, *args, **kwargs)
    448         # Check input assumptions set after layer building, e.g. input shape.
    449         self._assert_input_compatibility(inputs)
--> 450         outputs = self.call(inputs, *args, **kwargs)
    451 
    452         # Apply activity regularization.

/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/keras/python/keras/layers/core.py in call(self, inputs, training)
    113     if training is None:
    114       training = K.learning_phase()
--> 115     output = super(Dropout, self).call(inputs, training=training)
    116     if training is K.learning_phase():
    117       output._uses_learning_phase = True  # pylint: disable=protected-access

/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/core.py in call(self, inputs, training)
    262     return utils.smart_cond(training,
    263                             dropped_inputs,
--> 264                             lambda: array_ops.identity(inputs))
    265 
    266 

/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/utils.py in smart_cond(pred, fn1, fn2, name)
    206     raise TypeError('`fn2` must be callable.')
    207 
--> 208   pred_value = constant_value(pred)
    209   if pred_value is not None:
    210     if pred_value:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/layers/utils.py in constant_value(pred)
    236     pred_value = tensor_util.constant_value(pred)
    237   else:
--> 238     raise TypeError('`pred` must be a Tensor, a Variable, or a Python bool.')
    239   return pred_value

TypeError: `pred` must be a Tensor, a Variable, or a Python bool.

No luck (unresolved). Looking around at related material, it seems like it should be doable, though...
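
Reading the traceback, smart_cond() is handed the plain Python int that K.set_learning_phase(1) installed as the learning phase, and tf.layers.dropout refuses it. One untested guess at a workaround is to not force the learning phase and let Keras feed its own placeholder:

# untested workaround sketch: omit K.set_learning_phase(1) and let Keras
# manage the learning-phase placeholder itself (fit() feeds 1 during training,
# evaluate()/predict() feed 0)
with tf.Graph().as_default():
    with tf.Session() as session:
        K.set_session(session)
        # K.set_learning_phase(1)  # <- try omitting this line
        # ... build, compile, and fit the model exactly as above ...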

I'll take another run at it once I understand the internals better. I also want to learn how to debug Python properly...!
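
On the Python-debugging wish: the standard library ships with pdb, which pauses execution at any line you choose. A tiny starting point:

# pdb, the standard library debugger: execution pauses at set_trace();
# at the prompt, 'p <name>' prints a variable, 'n' steps, 'c' continues
import pdb

def buggy_function(x):
    pdb.set_trace()  # drops into an interactive prompt here
    return x + 1

buggy_function(41)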

Bonus

I happened across a Stack Overflow discussion about what "SciPy" is short for, which was a fun read.
