
Using Keras pretrained models from TensorFlow


Introduction

Keras makes it easy to use pretrained models such as VGG.
TensorFlow itself, on the other hand, ships no pretrained models,
so you have to obtain network definitions and weights from somewhere, which is a hassle.
(TFLearn and TF-Slim apparently include them, but Keras is all the TensorFlow wrapper I need.)

This article therefore shows how to use a Keras pretrained model from TensorFlow and fine-tune it.

References

http://zachmoshe.com/2017/11/11/use-keras-models-with-tf.html
https://github.com/JihongJu/keras-fcn/blob/master/keras_fcn/metrics.py

Using a pretrained model with Keras alone

First, here is the code that uses a pretrained model with Keras alone.
Information about this code is abundant online, so I will skip the explanation.

from tensorflow.python.keras.datasets import cifar10
from tensorflow.python.keras.applications.vgg16 import VGG16
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Dense
from tensorflow.python.keras import optimizers

# param
hidden_units = 1024
output_units = 10

# data
(x_train, t_train), (x_test, t_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
t_train = t_train.flatten()
t_test = t_test.flatten()

# model
base_model = VGG16(weights='imagenet',
                   include_top=False,
                   pooling='avg')
h = base_model.output
h = Dense(hidden_units, activation='relu')(h)
y = Dense(output_units, activation='softmax')(h)
model = Model(inputs=base_model.input, outputs=y)

# trainable variables
for layer in base_model.layers:
    layer.trainable = False

# loss and optimizer
model.compile(loss = 'sparse_categorical_crossentropy',
              optimizer = optimizers.SGD(lr=0.01),
              metrics=['accuracy'])

# training
model.fit(x_train, t_train, epochs=5, batch_size=50)

Using a pretrained model from TensorFlow

Next, the main topic: code that uses the pretrained model from TensorFlow.

from tensorflow.python.keras.datasets import cifar10
from tensorflow.python.keras.applications.vgg16 import VGG16
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Dense
from tensorflow.python.keras import backend as K
import numpy as np
import tensorflow as tf

# param
hidden_units = 1024
output_units = 10

# data
(x_train, t_train), (x_test, t_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
t_train = t_train.flatten()
t_test = t_test.flatten()

# model
base_model = VGG16(weights='imagenet',
                   include_top=False,
                   pooling='avg')

x = tf.placeholder(tf.float32, [None, x_train.shape[1], x_train.shape[2], x_train.shape[3]])
t = tf.placeholder(tf.int64, [None, ])

h = base_model(x)
h = tf.layers.dense(h, hidden_units, activation=tf.nn.relu)  # activation must be a callable, not a string
y = tf.layers.dense(h, output_units)

# trainable & uninitialized variables
uninitialized_variables = [v for v in tf.global_variables() \
    if not hasattr(v, '_keras_initialized') or not v._keras_initialized]

# loss and optimizer
ops = dict()
ops['loss'] = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=t, logits=y))
correct = tf.equal(tf.argmax(y, 1), t)
ops['accuracy'] = tf.reduce_mean(tf.cast(correct, "float"))
opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
ops['train_step'] = opt.minimize(ops['loss'], var_list=uninitialized_variables)

# initialization
sess = K.get_session()
sess.run(tf.variables_initializer(uninitialized_variables))

# training
epochs = 5
batch_size = 50
data_num = x_train.shape[0]
for epoch in range(epochs):
    loss_sum = 0
    accuracy_sum = 0
    shuffle_idx = np.random.permutation(data_num)
    for idx in range(0, data_num, batch_size):
        # slicing clamps at the array end, so the last (smaller) batch needs no special case
        x_batch = x_train[shuffle_idx[idx : idx + batch_size]]
        t_batch = t_train[shuffle_idx[idx : idx + batch_size]]
        ops_value = sess.run(ops, feed_dict = {x:x_batch, t:t_batch})
        loss_sum += ops_value['loss'] * x_batch.shape[0]
        accuracy_sum += ops_value['accuracy'] * x_batch.shape[0]
        print('\repoch : {} {}/{} train_loss : {} train_accuracy : {}'.format( \
              epoch, idx, data_num, loss_sum / (idx + x_batch.shape[0]), accuracy_sum / (idx + x_batch.shape[0])), end="")
    print()
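
A side note on the mini-batch slicing above: Python and NumPy slices clamp automatically at the end of an array, so the last, possibly smaller batch needs no explicit bounds check. A quick standalone check (plain NumPy, independent of the training code):

```python
import numpy as np

data = np.arange(10)  # pretend dataset of 10 samples
batch_size = 4
# slices past the end simply stop at the last element
batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

print([len(b) for b in batches])  # → [4, 4, 2]
```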

Now let me walk through the key parts.

trainable & uninitialized variables

uninitialized_variables = \
    [v for v in tf.global_variables() if not hasattr(v, '_keras_initialized') or not v._keras_initialized]

This part extracts the variables that do not belong to the pretrained VGG layers. Keras marks every variable it has already initialized (here, the pretrained VGG weights) with the private attribute _keras_initialized, so the comprehension keeps only the newly created, still-uninitialized variables.
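
The filtering itself is just an attribute check. Here is a minimal sketch of the same idiom with plain objects; FakeVar is a hypothetical stand-in for tf.Variable, not part of TensorFlow:

```python
class FakeVar:
    """Hypothetical stand-in for a tf.Variable (illustration only)."""
    def __init__(self, name, keras_initialized=None):
        self.name = name
        if keras_initialized is not None:
            # Keras marks variables it has initialized with this private attribute.
            self._keras_initialized = keras_initialized

variables = [
    FakeVar('vgg/conv1/kernel', keras_initialized=True),  # pretrained weight
    FakeVar('dense/kernel'),                              # new TF layer: attribute absent
    FakeVar('dense_1/kernel', keras_initialized=False),   # created but not yet initialized
]

# same comprehension as in the article
uninitialized = [v for v in variables
                 if not hasattr(v, '_keras_initialized') or not v._keras_initialized]
print([v.name for v in uninitialized])  # → ['dense/kernel', 'dense_1/kernel']
```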

Optimizing with an explicit list of trainable variables

ops['train_step'] = opt.minimize(ops['loss'], var_list=uninitialized_variables)

The variables extracted above are passed as var_list, so only they are updated during optimization; the pretrained VGG weights stay frozen.

Initializing only the uninitialized variables

sess.run(tf.variables_initializer(uninitialized_variables))

Only the extracted variables are initialized.
If you instead use global_variables_initializer(), the usual way to initialize, the pretrained weights get overwritten as well.

Caveats

Use the Keras that is bundled with TensorFlow (tensorflow.python.keras).
With a separately installed Keras, the code above may not work depending on the version combination.

Closing remarks

With the approach above, you can use Keras pretrained models from TensorFlow.
Now that Keras has been integrated into TensorFlow, I hope to keep simplifying my code with Keras + TensorFlow.
