
Machine Learning (GPU) on Google Cloud Platform (GCP)

Last updated at Posted at 2019-07-28

Google Cloud Platform

The collective name for the cloud services provided by Google. (https://cloud.google.com)

Machine learning dataset

Used cifar10 (I wanted to compare processing speed between a local CPU and a GPU).
cifar10: a dataset of 50,000 32x32 color training images and 10,000 test images, labeled across 10 classes.
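Before training, the images are scaled into [0, 1] and the integer labels are one-hot encoded, as the script below does. A minimal numpy-only sketch of that preprocessing (the arrays here are tiny dummies, not the real dataset):

```python
import numpy as np

# Dummy stand-ins for the CIFAR-10 arrays (the real shapes are
# (50000, 32, 32, 3) uint8 images and (50000, 1) integer labels)
x = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float") / 255.0
y = np.array([[3], [0], [9], [3]])

# One-hot encode the labels into 10 classes (what np_utils.to_categorical does)
y_onehot = np.eye(10)[y.reshape(-1)]

print(x.min() >= 0.0 and x.max() <= 1.0)  # True: pixels scaled into [0, 1]
print(y_onehot.shape)                     # (4, 10)
```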

Environment

Local

OS: Windows 10
Editor: Atom
Anaconda

For the GPU run only, add the following:

GCP OS: Ubuntu 16.04 LTS
GPU: CUDA 9.0
Device name: Tesla K80

Training code

train_cifar10.py

import numpy as np
import matplotlib.pyplot as plt
import keras 
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D,Activation,MaxPooling2D,Dense,Dropout,Flatten
from keras.optimizers import Adam
from keras.utils import np_utils

# Fetch the cifar10 data with the line below
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train=np.asarray(x_train).astype("float")/255.0
x_test=np.asarray(x_test).astype("float")/255.0
y_train=np_utils.to_categorical(y_train,10)
y_test=np_utils.to_categorical(y_test,10)
epoch=30
def train_model():
    model=Sequential()
    model.add(Conv2D(32,(3,3),padding="same",input_shape=x_train.shape[1:]))
    model.add(Activation("relu"))
    model.add(Conv2D(64,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))
        
    model.add(Conv2D(128,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(Conv2D(256,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))

    model.add(Conv2D(256,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(Conv2D(256,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))

    model.add(Flatten())
    model.add(Dense(1024))
    model.add(Activation("relu"))
    model.add(Dropout(0.5))
    model.add(Dense(y_train.shape[1]))
    model.add(Activation("softmax"))
    model.compile(loss="categorical_crossentropy",optimizer="Adam",metrics=["accuracy"])
    result=model.fit(x_train,y_train,batch_size=120,epochs=epoch,validation_split=0.2,shuffle=True)

    # Evaluation
    score=model.evaluate(x_test,y_test,verbose=0)
    print("Test Loss:",score[0])
    print("Test Accuracy:",score[1])

    # Save the model weights (used later for predict and evaluate)
    model.save("./cifar10_train.h5")
    return model

if __name__=="__main__":
    train_model()
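As a sanity check on the architecture: with "same" padding the conv layers preserve the spatial size, and each of the three max-pooling layers halves it, so Flatten() sees 4x4x256 = 4096 features before Dense(1024). A quick sketch of that bookkeeping:

```python
# Shape bookkeeping for the network above: "same" padding keeps H x W,
# and each 2x2 max-pool halves them, so three pools take 32 -> 16 -> 8 -> 4.
h = w = 32
for _ in range(3):       # three MaxPooling2D(pool_size=(2,2)) layers
    h, w = h // 2, w // 2

channels = 256           # filters in the last Conv2D block
flat = h * w * channels  # size of the Flatten() output fed to Dense(1024)
print(h, w, flat)        # 4 4 4096
```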

Training

GPU (trained on GCP)

Epoch 1/30
2019-07-28 09:48:47.961033: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-28 09:48:51.145668: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
50000/50000 [==============================] - 43s 863us/step - loss: 1.7028 - acc: 0.3595 - val_loss: 1.3475 - val_acc: 0.4981
Epoch 2/30
50000/50000 [==============================] - 32s 640us/step - loss: 1.1785 - acc: 0.5752 - val_loss: 0.9698 - val_acc: 0.6595
Epoch 3/30
50000/50000 [==============================] - 32s 639us/step - loss: 0.9198 - acc: 0.6775 - val_loss: 0.7873 - val_acc: 0.7246
Epoch 4/30
50000/50000 [==============================] - 32s 635us/step - loss: 0.7709 - acc: 0.7287 - val_loss: 0.7076 - val_acc: 0.7577
Epoch 5/30
50000/50000 [==============================] - 32s 637us/step - loss: 0.6777 - acc: 0.7638 - val_loss: 0.6583 - val_acc: 0.7758
Epoch 6/30
50000/50000 [==============================] - 32s 642us/step - loss: 0.6102 - acc: 0.7859 - val_loss: 0.6247 - val_acc: 0.7891
Epoch 7/30
50000/50000 [==============================] - 32s 644us/step - loss: 0.5575 - acc: 0.8044 - val_loss: 0.6225 - val_acc: 0.7926
Epoch 8/30
50000/50000 [==============================] - 32s 634us/step - loss: 0.5113 - acc: 0.8218 - val_loss: 0.5550 - val_acc: 0.8114
Epoch 9/30
50000/50000 [==============================] - 32s 640us/step - loss: 0.4731 - acc: 0.8329 - val_loss: 0.5555 - val_acc: 0.8117
Epoch 10/30
50000/50000 [==============================] - 32s 632us/step - loss: 0.4471 - acc: 0.8429 - val_loss: 0.5509 - val_acc: 0.8175
Epoch 11/30
50000/50000 [==============================] - 32s 636us/step - loss: 0.4145 - acc: 0.8558 - val_loss: 0.5768 - val_acc: 0.8170
Epoch 12/30
50000/50000 [==============================] - 32s 643us/step - loss: 0.3883 - acc: 0.8638 - val_loss: 0.5925 - val_acc: 0.8089
Epoch 13/30
50000/50000 [==============================] - 32s 639us/step - loss: 0.3715 - acc: 0.8695 - val_loss: 0.5669 - val_acc: 0.8156
Epoch 14/30
50000/50000 [==============================] - 32s 634us/step - loss: 0.3479 - acc: 0.8750 - val_loss: 0.6267 - val_acc: 0.8086
Epoch 15/30
50000/50000 [==============================] - 32s 641us/step - loss: 0.3352 - acc: 0.8806 - val_loss: 0.5758 - val_acc: 0.8277
Epoch 16/30
50000/50000 [==============================] - 31s 628us/step - loss: 0.3191 - acc: 0.8869 - val_loss: 0.5729 - val_acc: 0.8259
Epoch 17/30
50000/50000 [==============================] - 32s 642us/step - loss: 0.3068 - acc: 0.8912 - val_loss: 0.5528 - val_acc: 0.8331
Epoch 18/30
50000/50000 [==============================] - 32s 639us/step - loss: 0.2950 - acc: 0.8951 - val_loss: 0.5619 - val_acc: 0.8330
Epoch 19/30
50000/50000 [==============================] - 32s 640us/step - loss: 0.2897 - acc: 0.8986 - val_loss: 0.5595 - val_acc: 0.8319
Epoch 20/30
50000/50000 [==============================] - 32s 631us/step - loss: 0.2761 - acc: 0.9039 - val_loss: 0.6063 - val_acc: 0.8251
Epoch 21/30
50000/50000 [==============================] - 32s 631us/step - loss: 0.2736 - acc: 0.9039 - val_loss: 0.5647 - val_acc: 0.8375
Epoch 22/30
50000/50000 [==============================] - 31s 630us/step - loss: 0.2587 - acc: 0.9092 - val_loss: 0.5871 - val_acc: 0.8344
Epoch 23/30
50000/50000 [==============================] - 31s 629us/step - loss: 0.2578 - acc: 0.9109 - val_loss: 0.6191 - val_acc: 0.8303
Epoch 24/30
50000/50000 [==============================] - 31s 627us/step - loss: 0.2420 - acc: 0.9166 - val_loss: 0.5932 - val_acc: 0.8346
Epoch 25/30
50000/50000 [==============================] - 31s 628us/step - loss: 0.2473 - acc: 0.9135 - val_loss: 0.5977 - val_acc: 0.8358
Epoch 26/30
50000/50000 [==============================] - 31s 628us/step - loss: 0.2406 - acc: 0.9168 - val_loss: 0.5824 - val_acc: 0.8373
Epoch 27/30
50000/50000 [==============================] - 32s 630us/step - loss: 0.2355 - acc: 0.9179 - val_loss: 0.6245 - val_acc: 0.8322
Epoch 28/30
50000/50000 [==============================] - 31s 628us/step - loss: 0.2381 - acc: 0.9181 - val_loss: 0.5909 - val_acc: 0.8368
Epoch 29/30
50000/50000 [==============================] - 31s 629us/step - loss: 0.2292 - acc: 0.9211 - val_loss: 0.5889 - val_acc: 0.8408
Epoch 30/30
50000/50000 [==============================] - 32s 633us/step - loss: 0.2221 - acc: 0.9239 - val_loss: 0.5917 - val_acc: 0.8368
Test Loss: 0.591716582775116
Test Accuracy: 0.8368

CPU
Epoch 1/30
50000/50000 [==============================] - 216s 4ms/step - loss: 1.6905 - acc: 0.3657 - val_loss: 1.3237 - val_acc: 0.5209
Epoch 2/30
50000/50000 [==============================] - 215s 4ms/step - loss: 1.1577 - acc: 0.5846 - val_loss: 1.0158 - val_acc: 0.6408
Epoch 3/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.9370 - acc: 0.6677 - val_loss: 0.8442 - val_acc: 0.7018
Epoch 4/30
50000/50000 [==============================] - 216s 4ms/step - loss: 0.8047 - acc: 0.7174 - val_loss: 0.7127 - val_acc: 0.7519
Epoch 5/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.7146 - acc: 0.7494 - val_loss: 0.6724 - val_acc: 0.7659
Epoch 6/30
50000/50000 [==============================] - 217s 4ms/step - loss: 0.6437 - acc: 0.7736 - val_loss: 0.6371 - val_acc: 0.7783
Epoch 7/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.5817 - acc: 0.7962 - val_loss: 0.6174 - val_acc: 0.7872
Epoch 8/30
50000/50000 [==============================] - 216s 4ms/step - loss: 0.5347 - acc: 0.8120 - val_loss: 0.5697 - val_acc: 0.8086
Epoch 9/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.4989 - acc: 0.8255 - val_loss: 0.5885 - val_acc: 0.8058
Epoch 10/30
50000/50000 [==============================] - 216s 4ms/step - loss: 0.4632 - acc: 0.8362 - val_loss: 0.5526 - val_acc: 0.8129
Epoch 11/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.4324 - acc: 0.8483 - val_loss: 0.5568 - val_acc: 0.8198
Epoch 12/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.4154 - acc: 0.8542 - val_loss: 0.5906 - val_acc: 0.8065
Epoch 13/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.3894 - acc: 0.8634 - val_loss: 0.5491 - val_acc: 0.8212
Epoch 14/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.3727 - acc: 0.8697 - val_loss: 0.5525 - val_acc: 0.8212
Epoch 15/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.3477 - acc: 0.8784 - val_loss: 0.5607 - val_acc: 0.8241
Epoch 16/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.3404 - acc: 0.8804 - val_loss: 0.5476 - val_acc: 0.8287
Epoch 17/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.3208 - acc: 0.8861 - val_loss: 0.5676 - val_acc: 0.8302
Epoch 18/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.3099 - acc: 0.8898 - val_loss: 0.5521 - val_acc: 0.8274
Epoch 19/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2986 - acc: 0.8927 - val_loss: 0.5752 - val_acc: 0.8226
Epoch 20/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2948 - acc: 0.8978 - val_loss: 0.5946 - val_acc: 0.8218
Epoch 21/30
50000/50000 [==============================] - 216s 4ms/step - loss: 0.2755 - acc: 0.9036 - val_loss: 0.6167 - val_acc: 0.8211
Epoch 22/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2726 - acc: 0.9050 - val_loss: 0.5721 - val_acc: 0.8363
Epoch 23/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2623 - acc: 0.9081 - val_loss: 0.5658 - val_acc: 0.8354
Epoch 24/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2633 - acc: 0.9080 - val_loss: 0.5759 - val_acc: 0.8347
Epoch 25/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2575 - acc: 0.9105 - val_loss: 0.6118 - val_acc: 0.8337
Epoch 26/30
50000/50000 [==============================] - 215s 4ms/step - loss: 0.2530 - acc: 0.9121 - val_loss: 0.5677 - val_acc: 0.8370
Epoch 27/30
50000/50000 [==============================] - 222s 4ms/step - loss: 0.2432 - acc: 0.9155 - val_loss: 0.6069 - val_acc: 0.8317
Epoch 28/30
50000/50000 [==============================] - 233s 5ms/step - loss: 0.2387 - acc: 0.9163 - val_loss: 0.5798 - val_acc: 0.8387
Epoch 29/30
50000/50000 [==============================] - 232s 5ms/step - loss: 0.2392 - acc: 0.9163 - val_loss: 0.6137 - val_acc: 0.8265
Epoch 30/30
50000/50000 [==============================] - 230s 5ms/step - loss: 0.2395 - acc: 0.9169 - val_loss: 0.5662 - val_acc: 0.8392
Test Loss: 0.5662494749784469
Test Accuracy: 0.8392

Time per epoch:
GPU: ~30 s
CPU: 220-230 s
Roughly a 6x difference.
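The speedup can be read straight off the per-epoch times in the logs above; a quick sketch using representative values:

```python
# Representative per-epoch wall-clock times taken from the training logs above
gpu_epoch_s = 32   # GPU (Tesla K80): ~31-32 s per epoch
cpu_epoch_s = 220  # CPU: ~215-233 s per epoch

speedup = cpu_epoch_s / gpu_epoch_s
total_gpu_min = gpu_epoch_s * 30 / 60  # total minutes for all 30 epochs
total_cpu_min = cpu_epoch_s * 30 / 60

print(round(speedup, 1))             # 6.9
print(total_gpu_min, total_cpu_min)  # 16.0 110.0
```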

I'd like to try this with more varied data.

Addendum

Testing with a different image.
Test image:
airplane.jpg

Directory: create a folder called pics in the directory containing the code below (the current directory) and put the image there:
"./pics/{name of your choice}.jpg"
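predict.py resizes the input image to the 32x32 shape the network expects and adds a batch dimension. A minimal sketch of that preprocessing, using an in-memory dummy image instead of the actual ./pics/airplane.jpg:

```python
import numpy as np
from PIL import Image

# Dummy RGB image in memory, standing in for "./pics/airplane.jpg"
img = Image.fromarray(np.zeros((64, 48, 3), dtype=np.uint8))

img = img.resize((32, 32))  # match the 32x32 input the model was trained on
arr = np.asarray(img)
X = np.array([arr])         # add a batch dimension -> shape (1, 32, 32, 3)

print(X.shape)  # (1, 32, 32, 3)
```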

predict.py

import numpy as np 
from PIL import Image 
import keras,sys,os 
from keras.datasets import cifar10
from keras.models import Sequential,load_model 
from keras.layers import Conv2D,MaxPooling2D,Dropout,Dense,Flatten,Activation
from keras.optimizers import Adam 
from keras.utils import np_utils
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train=np.asarray(x_train).astype("float")/255.0
x_test=np.asarray(x_test).astype("float")/255.0
y_train=np_utils.to_categorical(y_train,10)
y_test=np_utils.to_categorical(y_test,10)
def model_build():
    model=Sequential()
    model.add(Conv2D(32,(3,3),padding="same",input_shape=x_train.shape[1:]))
    model.add(Activation("relu"))
    model.add(Conv2D(64,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))
        
    model.add(Conv2D(128,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(Conv2D(256,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))

    model.add(Conv2D(256,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(Conv2D(256,(3,3),padding="same"))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))

    model.add(Flatten())
    model.add(Dense(1024))
    model.add(Activation("relu"))
    model.add(Dropout(0.5))
    model.add(Dense(y_train.shape[1]))
    model.add(Activation("softmax"))

    model.compile(loss="categorical_crossentropy",optimizer="Adam",metrics=["accuracy"])
    model=load_model("cifar10_train.h5")  # load the saved weights (this replaces the model built above)
    return model 

def main():
    X=[]
    img_dir="./pics/airplane.jpg"
    img=Image.open(img_dir)
    img=img.resize((32,32))
    img=np.asarray(img)
    X.append(img)
    X=np.array(X)
    X=X.astype("float")/255.0  # scale to [0,1] to match the training preprocessing
    model=model_build()
    result=model.predict(X)[0]
    pre=result.argmax()
    percent=int(result[pre]*100)
    img_pred=model.predict_classes(X)
    print("pred:{0}:{1}%".format(img_pred,percent))

if __name__=="__main__":
    main()

・Output
print(result): [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
print("pred:{0}:{1}%".format(img_pred,percent)): pred:[0]:100%

print(result) outputs, for each label, how strongly the input image corresponds to that class, and result.argmax() extracts the index of the maximum value; here that is result[0].

In cifar10, label 0 is airplane, so the output is correct.
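The argmax step can be illustrated with plain numpy (the label names follow the standard CIFAR-10 class order):

```python
import numpy as np

# Hypothetical softmax output for one image: one probability per class
result = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

pre = result.argmax()             # index of the most probable class
percent = int(result[pre] * 100)  # confidence as an integer percentage

labels = ["airplane", "automobile", "bird", "cat", "deer",
          "dog", "frog", "horse", "ship", "truck"]  # standard CIFAR-10 order
print(pre, percent, labels[pre])  # 0 100 airplane
```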
