
Machine Learning Application with Django: Data Collection, Model Building, and Evaluation

Posted at 2019-09-02

#From Building to Evaluating an Image Recognition Model

Images were collected from the photo-sharing site Flickr and used to train a CNN with machine learning (ML).

Site creation: https://qiita.com/T_Na/items/f87bbcecb3edfca8f07d

##Data Collection
Images are downloaded via flickrapi.

```python:data.py
from flickrapi import FlickrAPI
from urllib.request import urlretrieve
import sys, os, time
from pprint import pprint

key = "###"
secret = "###"
imagename = sys.argv[1]
# Pass the search term on the command line (Windows command prompt / Mac terminal),
# e.g. `python data.py cat`.
save_dir = "./image/" + imagename
if not os.path.exists(save_dir):
    os.mkdir(save_dir)
flickr = FlickrAPI(key, secret, format="parsed-json")
flickr_result = flickr.photos.search(text=imagename, per_page=300, media="photos",
                                     sort="relevance", safe_search=1, extras="url_q,license")
photos = flickr_result['photos']
# pprint(flickr_result)
for photo in photos['photo']:
    url_q = photo['url_q']
    filepath = save_dir + "/" + photo["id"] + ".jpg"
    if os.path.exists(filepath):
        continue
    urlretrieve(url_q, filepath)
    time.sleep(1)
```

Running `print(flickr_result)` shows the response below. The parts we need are then extracted from it: each entry in `photos['photo']` describes one image, and its `url_q` field holds the image URL used for the download.

```
{'photos': {'page': 1,
            'pages': 1,
            'perpage': 300,
            'photo': [{'farm': 6,
                       'height_q': '150',
                       'id': '11556879965',
                       'isfamily': 0,
                       'isfriend': 0,
                       'ispublic': 1,
                       'owner': '58701173@N07',
                       'secret': 'f4e362d0e8',
                       'server': '5490',
                       'title': 'monkry 006',
                       'url_q': 'https://live.staticflickr.com/5490/11556879965_f4e362d0e8_q.jpg',
                       'width_q': '150'},
```
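Navigating this parsed-json response is plain dictionary access. A minimal sketch, using a trimmed, hypothetical stand-in for `flickr_result` built from the sample above:

```python
# Trimmed stand-in for the flickr_result dict shown above (hypothetical data).
flickr_result = {
    "photos": {
        "page": 1,
        "photo": [
            {"id": "11556879965",
             "url_q": "https://live.staticflickr.com/5490/11556879965_f4e362d0e8_q.jpg"},
        ],
    }
}

# Collect (id, url_q) pairs, mirroring what the download loop in data.py reads.
downloads = [(p["id"], p["url_q"]) for p in flickr_result["photos"]["photo"]]
print(downloads[0][0])  # 11556879965
```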

##Creating a Dataset for Machine Learning

```python:get_data.py
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
import os, glob

imagename = []  # labels of the collected images (e.g. animal names)
image_size = 100
X, Y = [], []

for labels, imgname in enumerate(imagename):
    imgdir = "./image/" + imgname
    files = glob.glob(imgdir + "/*.jpg")
    for index, file in enumerate(files):
        if index >= 300:
            break
        img = Image.open(file)
        img = img.convert("RGB")
        img = img.resize((image_size, image_size))
        # Data augmentation: rotate from -30° to 30° in 15° steps, and also
        # flip each rotation left-right, doubling the data.
        # This step turns 1 image into 10 samples.
        for angle in range(-30, 31, 15):  # -30, -15, 0, 15, 30
            img_rot = img.rotate(angle)
            X.append(np.asarray(img_rot))
            Y.append(labels)
            img_tra = img_rot.transpose(Image.FLIP_LEFT_RIGHT)
            X.append(np.asarray(img_tra))
            Y.append(labels)

x = np.array(X)
y = np.array(Y)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=123)
# test_size=0.2: reserve 20% of the data for testing
data_t = (x_train, x_test, y_train, y_test)
# Save the dataset
np.save("./data/data-1.npy", data_t)
```
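The augmentation described above aims to produce 10 samples per source image: five rotations (from -30° to 30° in 15° steps) plus a left-right flip of each. The bookkeeping can be sketched in plain Python, without touching PIL:

```python
# Enumerate the (angle, flipped) variants generated for each source image.
angles = list(range(-30, 31, 15))  # -30, -15, 0, 15, 30
variants = [(a, flipped) for a in angles for flipped in (False, True)]
print(len(variants))  # 10 samples per source image
```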

##Machine Learning Model

```python:cnn.py
import keras
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Dense, Activation, Flatten
from keras.utils import np_utils
from keras.optimizers import Adam

imagename = []  # labels of the collected images (e.g. animal names)
size = 120      # batch size
epoch = 50
image_size = 100
image_len = len(imagename)
# allow_pickle=True is needed to load a tuple of arrays with NumPy 1.16.3 and later
x_train, x_test, y_train, y_test = np.load("./data/data-1.npy", allow_pickle=True)
x_train = x_train.astype("float") / 255.0
x_test = x_test.astype("float") / 255.0
y_train = y_train.astype("int")
y_test = y_test.astype("int")
y_train = np_utils.to_categorical(y_train, image_len)
y_test = np_utils.to_categorical(y_test, image_len)
# Build the ML model
model = Sequential()
model.add(Conv2D(32, (3, 3), padding="same", input_shape=x_train.shape[1:]))
model.add(Activation("relu"))
model.add(Conv2D(32, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

model.add(Conv2D(128, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(Conv2D(128, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

model.add(Flatten())
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(y_train.shape[1]))
model.add(Activation("softmax"))

model.compile(loss="categorical_crossentropy", optimizer="Adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=size, epochs=epoch, validation_split=0.1, shuffle=True, verbose=1)
# Save the model
model.save("./model/model-1.h5")
score = model.evaluate(x_test, y_test, verbose=0)
print("Test Loss:", score[0], "Test Accuracy:", score[1])
```
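`np_utils.to_categorical` converts the integer class labels into one-hot vectors, which `categorical_crossentropy` expects. The conversion itself is simple; a plain-Python sketch of what it does:

```python
def to_one_hot(labels, num_classes):
    # Each integer label i becomes a vector with a 1 at index i and 0 elsewhere.
    return [[1 if i == label else 0 for i in range(num_classes)]
            for label in labels]

print(to_one_hot([0, 2, 1], 3))  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```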

##Training Results

```
Train on 6472 samples, validate on 720 samples
Epoch 1/50
6472/6472 [==============================] - 101s 16ms/step - loss: 0.9608 - acc: 0.5246 - val_loss: 0.8161 - val_acc: 0.6444
Epoch 2/50
6472/6472 [==============================] - 100s 16ms/step - loss: 0.7509 - acc: 0.6673 - val_loss: 0.7241 - val_acc: 0.7014
Epoch 3/50
6472/6472 [==============================] - 100s 15ms/step - loss: 0.6099 - acc: 0.7494 - val_loss: 0.5272 - val_acc: 0.8028
(intermediate epochs omitted)
Epoch 47/50
6472/6472 [==============================] - 189s 29ms/step - loss: 0.0057 - acc: 0.9978 - val_loss: 3.0830e-04 - val_acc: 1.0000
Epoch 48/50
6472/6472 [==============================] - 189s 29ms/step - loss: 0.0084 - acc: 0.9963 - val_loss: 0.0066 - val_acc: 0.9972
Epoch 49/50
6472/6472 [==============================] - 189s 29ms/step - loss: 0.0052 - acc: 0.9988 - val_loss: 0.0065 - val_acc: 0.9986
Epoch 50/50
6472/6472 [==============================] - 190s 29ms/step - loss: 0.0106 - acc: 0.9960 - val_loss: 0.0135 - val_acc: 0.9986

Test Loss: 0.0032594245270682726 Test Accuracy: 0.9988876529477196
```

##Evaluating the Model
Classify an image that was not used for training.

```python:predict.py
import numpy as np
import keras, sys
from keras.models import load_model
from PIL import Image

imagename = []  # labels of the collected images (e.g. animal names)
image_len = len(imagename)
image_size = 100

img_dir = "./pics/" + sys.argv[1]
image = Image.open(img_dir)
image = image.convert("RGB")
image = image.resize((image_size, image_size))
data = np.asarray(image) / 255.0
X = []
X.append(data)
X = np.array(X)

model = load_model("./model/model-1.h5")
result = model.predict([X])[0]  # e.g. [5.9185062e-19 9.9720252e-01 2.7975517e-03]; length equals the number of labels
predicted = result.argmax()  # index of the highest value in result
percentage = int(result[predicted] * 100)
print(imagename[predicted], percentage)
```
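The last three lines above reduce the probability vector to a label and a confidence. A stand-alone sketch with a hypothetical label list and probability vector:

```python
imagename = ["dog", "monkey", "cat"]      # hypothetical labels
result = [5.9e-19, 9.972e-01, 2.798e-03]  # hypothetical softmax output

# Index of the maximum probability, like numpy's argmax.
predicted = max(range(len(result)), key=lambda i: result[i])
percentage = int(result[predicted] * 100)
print(imagename[predicted], percentage)  # monkey 99
```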

Test image:
img2.jpg

Result:
サル 99 (サル = monkey)