
Image Classification with VGG16

Posted at 2018-11-15

Goals

  • Learn Keras
  • Deepen my understanding of neural networks
  • Perform class classification with VGG16, a pretrained model bundled with Keras

Overview

Dataset: ImageNet
Network: VGG16
Environment: Google Colaboratory (GPU)

ImageNet
A dataset of roughly 14 million images spanning about 20,000 classes.

VGG16
A model trained on images drawn from ImageNet (1,000 classes).
A 16-layer neural network consisting of 13 convolutional layers and 3 fully connected layers.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = {'png', 'retina'}
import os
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input, decode_predictions
import keras.preprocessing.image as Image
from keras.models import Model

Loading VGG16

  • include_top: whether to include the three fully connected layers at the output end of the network (default: True)
    When True, the input size is 224x224 (corrected 2020-03-28)
  • weights: whether to use weights pretrained on ImageNet (default: "imagenet")
  • input_shape: a shape tuple (default: None)
    Can only be specified when include_top is False (width and height must be at least 48, with 3 channels)
    When include_top is True, input_shape must be None (width and height are 224, with 3 channels) (added 2020-03-28)
model = VGG16(
    include_top=True,
    weights="imagenet",
    input_shape=None
)
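
As a quick sanity check (this snippet is not in the original article), the loaded model's input and output shapes can be inspected; the shapes in the comments assume the standard Keras VGG16 with include_top=True.

print(model.input_shape)   # (None, 224, 224, 3)
print(model.output_shape)  # (None, 1000) -- one probability per ImageNet class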

Loading the image to classify

image_path = "img/golden_retriever.jpg"
image = Image.load_img(image_path) 
image

image.size

(270, 202)

# Resize to (224, 224)
image = Image.load_img(image_path, target_size=(224, 224)) 
image.size

(224, 224)

# Convert to a NumPy array
x = Image.img_to_array(image)
x.shape

(224, 224, 3)

# Add a batch dimension along axis 0
x = np.expand_dims(x, axis=0) 
x.shape

(1, 224, 224, 3)

# Subtract the mean pixel values (103.939, 116.779, 123.68) from each pixel
# Reorder the color channels from RGB to BGR
x = preprocess_input(x)
x.shape

(1, 224, 224, 3)
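
For reference, here is a minimal sketch of what preprocess_input does for VGG16 (Caffe-style preprocessing). It is illustrative only; the library function should be used in practice.

def vgg16_preprocess_sketch(img_array):
    x = img_array.astype("float32")[..., ::-1].copy()  # RGB -> BGR
    x[..., 0] -= 103.939   # subtract the ImageNet channel means (BGR order)
    x[..., 1] -= 116.779
    x[..., 2] -= 123.68
    return x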

Prediction with the VGG16 model

result = model.predict(x)
# Decode the top 3 predictions into (class, description, probability)
result = decode_predictions(result, top=3)
result

[[('n02099601', 'golden_retriever', 0.9767477),
('n02102318', 'cocker_spaniel', 0.010859662),
('n02100877', 'Irish_setter', 0.004005171)]]

We can see that the model classifies the image as golden_retriever with high confidence.
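
The decoded tuples can also be printed in a more readable form (a small illustrative snippet, not in the original article):

for class_id, description, probability in result[0]:
    print(f"{description:20s} {probability:.2%}")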

[Extra] Feature extraction with the VGG16 model

In a CNN, the shallower layers tend to detect simple features such as colors, edges, and blobs, while the deeper layers tend to detect higher-level features such as shapes and objects built from combinations of them. In other words, each layer in the network retains the ability to extract meaningful features from an image.

Benefits

  • The learned features can be reused for other tasks
  • Training time is reduced, since most of the network has already been trained
  • Good results can be expected even when the available data is limited (see the transfer-learning sketch below)
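
As an illustration of these benefits, here is a minimal transfer-learning sketch (not from the original article): the pretrained convolutional base is frozen and only a small new classification head is trained. The 10-class task and the GlobalAveragePooling2D/Dense head are assumptions made for the example.

from keras.layers import Dense, GlobalAveragePooling2D

num_classes = 10  # hypothetical target task

frozen_base = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
for layer in frozen_base.layers:
    layer.trainable = False  # keep the pretrained convolutional weights fixed

features = GlobalAveragePooling2D()(frozen_base.output)
outputs = Dense(num_classes, activation="softmax")(features)
transfer_model = Model(inputs=frozen_base.input, outputs=outputs)
transfer_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# transfer_model.fit(...) would then be called on the new task's data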
base_model = VGG16(
    include_top=False,
    weights="imagenet",
    input_shape=None
)
base_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_4 (InputLayer)         (None, None, None, 3)     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, None, None, 64)    1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, None, None, 64)    36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, None, None, 64)    0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, None, None, 128)   73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, None, None, 128)   147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, None, None, 128)   0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, None, None, 256)   295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, None, None, 256)   0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, None, None, 512)   0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________

Feature extraction from the block4_pool layer

model = Model(
    inputs=base_model.input,
    outputs=base_model.get_layer("block4_pool").output
)
block4_pool_features = model.predict(x)
block4_pool_features.shape

(1, 14, 14, 512)
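
The same idea extends to any block. A model with several outputs can illustrate how the spatial resolution shrinks while the channel depth grows with depth (an illustrative sketch, not from the original article; the shapes in the comment assume the 224x224 input used above).

layer_names = ["block1_pool", "block2_pool", "block3_pool", "block4_pool", "block5_pool"]
probe = Model(
    inputs=base_model.input,
    outputs=[base_model.get_layer(name).output for name in layer_names],
)
for name, feature_map in zip(layer_names, probe.predict(x)):
    print(name, feature_map.shape)
# e.g. block1_pool (1, 112, 112, 64) ... block5_pool (1, 7, 7, 512)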

Apparently there are also cases where the VGG16 model is used purely as a feature extractor and the extracted features are fed to an SVM (Support Vector Machine) instead of fully connected layers. I would like to try this out if I get the chance.
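
For concreteness, here is a minimal sketch of that idea (untested here; train_images and train_labels are hypothetical arrays of images preprocessed as above and their integer labels):

from sklearn.svm import SVC

train_features = model.predict(train_images)                       # block4_pool features, shape (n, 14, 14, 512)
train_features = train_features.reshape(len(train_features), -1)   # flatten to feature vectors
svm = SVC(kernel="linear")
svm.fit(train_features, train_labels)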
