
Improving the glasses-girl classification system (12 classification methods: SVM, Random Forest, Deep Learning, and others)

Posted at 2016-07-15

Background

Summary

Method

  • Obtained 620 resume-style (ID-photo-like) face images
  • Labeled each image as "glasses" or "no glasses"
  • Split them into roughly 600 training images and roughly 20 test images
  • Tried 12 classification methods to see whether they could separate glasses from no glasses

Conclusion

  • Deep Learning tied for first place, but with five methods sharing first it feels like just one of them.

Rank  Method               Accuracy
1st   Deep Learning        0.938
1st   Decision tree        0.938
1st   Neural Network LVQ1  0.938
1st   Bagging              0.938
1st   Boosting             0.938
2nd   Random Forest        0.875
2nd   k-Nearest Neighbor   0.875
2nd   SVM                  0.875

Impressions

  • Every method performed better than I expected.
  • Why are the results this good?
  • Maybe because the image feature extraction works so well? It looks very simple, though.
  • I would like to look into how each method actually makes its decisions (someday... — see the sketch right after this list for one possible starting point).
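
A minimal sketch of where such an investigation might start (this is my addition, not code from the article; M.tree and M.rf are hypothetical names for the rpart and randomForest models fitted in the Details section):

library(rpart)
library(randomForest)

data.train <- read.csv("train.csv")
data.train$label <- as.factor(data.train$label)

# Refit the two models used in the Details section
M.tree <- rpart(label ~ ., data = data.train)
M.rf   <- randomForest(label ~ ., data = data.train, mtry = 45)

# Which pixel columns the tree splits on, and which pixels the forest considers important
printcp(M.tree)
M.tree$variable.importance
varImpPlot(M.rf)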

Details

Method details

See these articles:
1.  I tried SVM in R while following sites found via Google: classifying people with glasses vs. everyone else
2.  Various discriminant analyses in R (11 methods + Deep Learning)

Data used

Resume-style images, 48 x 27 = 1296 elements each

Data set        Number of samples
Training data   about 600 images
Test data       about 20 images

Results (the previous 11 methods + Deep Learning)

Method                           package       function          Training acc.  Test acc.  Notes
Deep Learning                    H2O           h2o.deeplearning  1              0.938      activation="RectifierWithDropout", epochs=100, hidden=c(64,64,64,64), rate=0.01, rate_annealing=1e-7, input_dropout_ratio=0.1
Linear discriminant analysis     MASS          lda               failed         failed     Error: variable 66 appears to be constant within groups
Quadratic discriminant analysis  MASS          qda               failed         failed     Error: some group is too small for 'qda'
k-Nearest Neighbor               class         knn               0.866          0.875      k=5
Naive Bayes                      e1071         naiveBayes        0.508          0.438
Decision tree                    rpart         rpart             0.913          0.938
Neural Network (3 layers)        nnet          nnet              failed         failed     Error: too many (9094) weights
Neural Network LVQ1              class         lvq1              0.999          0.938      k=1, alpha=0.125
SVM                              e1071         svm               1              0.875      gamma=0.1767767, cost=2.828427
Bagging                          adabag        bagging           0.944          0.938
Boosting                         adabag        boosting          1              0.938
Random Forest                    randomForest  randomForest      1              0.875      mtry=45
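
The H2O Deep Learning script itself is not reproduced in this article; the following is only a rough sketch of what the call could look like with the parameters listed in the table (one assumption on my part: adaptive_rate is switched off so that rate and rate_annealing actually take effect).

library(h2o)
h2o.init()

train <- as.h2o(read.csv("train.csv"))
test  <- as.h2o(read.csv("test.csv"))
train$label <- as.factor(train$label)
test$label  <- as.factor(test$label)

M <- h2o.deeplearning(
  x = setdiff(names(train), "label"),
  y = "label",
  training_frame = train,
  activation = "RectifierWithDropout",
  epochs = 100,
  hidden = c(64, 64, 64, 64),
  adaptive_rate = FALSE,        # assumption: needed so rate/rate_annealing apply
  rate = 0.01,
  rate_annealing = 1e-7,
  input_dropout_ratio = 0.1
)

# accuracy on the test data
h2o.confusionMatrix(M, test)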

Creating the image feature vectors

(The original code is adapted from here.)

library(biOps)

# Target (downsized) image dimensions
kXBase = 48
kYBase = 27
kXYVecMax = kXBase * kYBase

CreateDataset <- function(){
  # Load the training images
  gl_imgs   <- ReadImages("./data/glasses/")
  none_imgs <- ReadImages("./data/no_glasses/")
  # Build feature vectors
  gl_df     <- ConvertImagesToFeatureVectors(gl_imgs,   "glasses")
  none_df   <- ConvertImagesToFeatureVectors(none_imgs, "none")
  # Build the data set
  df        <- rbind(gl_df, none_df)
  return(df)
}

CreateSamples <- function(){
  # Load the test (sample) images
  gl_imgs   <- ReadImages("./data/sample_glasses/")
  none_imgs <- ReadImages("./data/sample_no_glasses/")

  gl_df     <- ConvertImagesToFeatureVectors(gl_imgs, "glasses")
  none_df   <- ConvertImagesToFeatureVectors(none_imgs, "none")
  # Build the data set
  df        <- rbind(gl_df, none_df)
  return(df)
}

ReadImages <- function(dir){
  print(dir)
  filenames <- list.files(dir, "*.jpg", full.names=T)
  images <- lapply(filenames, readJpeg)
  print(filenames)
  return(images)
}

# Downsize a loaded jpg to kXBase x kYBase
DoDownsizing <- function(img){
  x_size <- ncol(img)
  y_size <- nrow(img)
  x_scale <- kXBase / x_size
  y_scale <- kYBase / y_size
  img <- imgAverageShrink(img, x=x_scale, y=y_scale)
  return(img)
}


# Convert a loaded jpg to grayscale
ToGrayscale <- function(img){
  return(imgRGB2Grey(img, coefs=c(0.30, 0.59, 0.11)))
}

# Emphasize edges in a loaded jpg (Canny)
EmphasizeEdge <- function(img){
  img <- imgCanny(img, sigma=0.4)
  return(img)
}

# Build a feature vector from one image
ConvertImageToFeatureVector <- function(img){
  # 1. original image
  # 2. grayscale conversion
  # black: 0, white: 255
  g_img <- ToGrayscale(img)
  # 3. edge emphasis
  ge_img <- EmphasizeEdge(g_img)
  # 4. downsizing
  ged_img <- DoDownsizing(ge_img)
  vec <- as.vector(ged_img)
  # black: 0 to white: 1 
  normalized_vec <- vec / 255
  return(normalized_vec)
}

# Convert all images to feature vectors
ConvertImagesToFeatureVectors <- function(imgs, label){
  vectors_list <- lapply(imgs, ConvertImageToFeatureVector)
  vectors_list <- lapply(vectors_list, (function(x) {x[1:kXYVecMax]}))
  df <- as.data.frame(do.call("rbind", vectors_list))
  df[is.na(df)] <- 1
  df$label <- as.factor(label)
  return(df)
}

# MAIN
dataset.train <- CreateDataset()
dataset.test  <- CreateSamples()

write.csv(x = dataset.train, file = "train.csv", row.names = FALSE)
write.csv(x = dataset.test,  file = "test.csv" , row.names = FALSE)

Linear discriminant analysis

#
# Linear discriminant analysis
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(MASS)
M<-lda(label~.,data=data.train)
(M.tab<-table(data.train[,ncol(data.train)],predict(M)$class))
sum(M.tab[row(M.tab)==col(M.tab)])/sum(M.tab)

# predict
P=predict(M,data.test[,-ncol(data.test)])

# result
t<-table(data.test[,ncol(data.test)],P$class)
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])
A.lda<-round(a,3)
A.lda
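
As the results table shows, lda aborts with "variable 66 appears to be constant within groups". A possible workaround (my assumption, not part of the original article) is to drop pixel columns that are constant within either class before fitting:

# Hedged sketch: keep only feature columns with non-zero variance in every group
feat <- data.train[, -ncol(data.train)]
lab  <- data.train[, ncol(data.train)]
ok   <- sapply(feat, function(v) all(tapply(v, lab, sd) > 0))
M2   <- lda(label ~ ., data = data.train[, c(which(ok), ncol(data.train))])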

Quadratic discriminant analysis

#
# Quadratic discriminant analysis
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(MASS)
M<-qda(label~.,data=data.train)
(M.tab<-table(data.train[,ncol(data.train)],predict(M)$class))
sum(M.tab[row(M.tab)==col(M.tab)])/sum(M.tab)

# predict
P<-predict(M,data.test[,-ncol(data.test)])

# result
t<-table(data.test[,ncol(data.test)],P$class)
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])
A.qda<-round(a,3)
A.qda
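
qda fails with "some group is too small for 'qda'": with 1296 features, each class would need more observations than features. One possible workaround (again my assumption, not from the article) is to reduce the dimensionality first, for example with PCA:

# Hedged sketch: project onto a few principal components before qda
pca   <- prcomp(data.train[, -ncol(data.train)])
ncomp <- 10   # hypothetical number of components
tr <- data.frame(pca$x[, 1:ncomp], label = data.train$label)
te <- data.frame(predict(pca, data.test[, -ncol(data.test)])[, 1:ncomp],
                 label = data.test$label)
M2 <- qda(label ~ ., data = tr)
table(te$label, predict(M2, te[, -ncol(te)])$class)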

k-Nearest Neighbor

#
# k-Nearest Neighbor
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model / prediction
library(class)
V<-knn(data.train[,-ncol(data.train)],data.train[,-ncol(data.train)],data.train[,ncol(data.train)],k=5)
P<-knn(data.train[,-ncol(data.train)],data.test[,-ncol(data.test)],data.train[,ncol(data.train)],k=5)

# result
u<-table(data.train[,ncol(data.train)],V) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test

Naive Bayes

#
# Naive Bayes
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(e1071)
M<-naiveBayes(label~.,data.train)

# predict
V<-predict(M,data.train[,-ncol(data.train)])
P<-predict(M,data.test[,-ncol(data.test)])


# result
u<-table(data.train[,ncol(data.train)],V) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test

Decision tree

#
# Decision tree
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(rpart)
M<-rpart(label~.,data=data.train)

# predict
V<-predict(M,data.train[,-ncol(data.train)],type="class")
P<-predict(M,data.test[,-ncol(data.test)],type="class")

# result
u<-table(data.train[,ncol(data.train)],V) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test

Neural Network (3 layers)

#
# 3-layer neural network (nnet package)
#

# Create sample
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# Neural Network
library(nnet)

# model
M<-nnet(label~.,size=7,decay=0.3,data=data.train)

# prediction
V<-predict(M,data.train[,-ncol(data.train)],type="class")
P<-predict(M,data.test[,-ncol(data.test)],type="class")

# result
u<-table(data.train[,ncol(data.train)],V) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test
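
The "too many (9094) weights" error comes from nnet's default weight cap (MaxNWts = 1000), which is far below what 1296 inputs and 7 hidden units require. A hedged fix (not attempted in the article) is simply to raise that cap:

# Hedged sketch: raise nnet's weight limit so the 3-layer network can be fitted
M<-nnet(label~.,size=7,decay=0.3,data=data.train,MaxNWts=10000,maxit=200)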

Neural Network LVQ1

#
# Neural Network (Learning Vector Quantization)
#

# Create sample
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

num<-10*(1:(nrow(data.train)/10))
data.train.test<-data.train[num,]   # for tuning (test); not used below
data.train.train<-data.train[-num,] # for tuning (training); not used below

# Prepare parallel
library(doParallel)
cl<-makeCluster(detectCores()-1)
registerDoParallel(cl)


# Neural Network (Learning Vector Quantization)
library(class)

A<-3
B<-8

xa<-array(0,dim=c(A,B)) # for test
xb<-array(0,dim=c(A,B)) # for train


for(k in 1:A ) {
  Minit<-lvqinit(data.train[,-ncol(data.train)],data.train[,ncol(data.train)],k = A^(k-1))

  r<-foreach(i=1:B, .combine=cbind, .packages="class") %dopar% {
    M<-lvq1(data.train[,-ncol(data.train)],data.train[,ncol(data.train)],Minit, alpha = i/B)
    V<-lvqtest(M,data.train[,-ncol(data.train)])
    P<-lvqtest(M,data.test[,-ncol(data.test)])

    u<-table(data.train[,ncol(data.train)],V)
    t<-table(data.test[,ncol(data.test)],P)

    rb<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])
    ra<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])
    list(k,i,ra,rb)
  }
  for(i in 1:B){
    xa[k,r[[2,i]]] <- r[[3,i]] # because of the parallel loop, results in r come back in arbitrary order
    xb[k,r[[2,i]]] <- r[[4,i]] # use the i recorded inside r to pick the column of xa / xb
  }
}

b<-max(xb)
a<-max(xa)

print(xb)
print(max(xb))
print(which.max(xb))
print(xa)
print(max(xa))
print(which.max(xa))
b
a

# Stop Cluster
stopCluster(cl)

SVM

#
# Support Vector Machine
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(e1071)
M <- svm(
  label ~ .,
  data  = data.train,
  gamma = 0.1767767,
  cost  = 2.828427
)

# prediction
V <- predict(M, data.train)
P <- predict(M, data.test)

# result
u<-table(data.train[,ncol(data.train)],V) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test

SVM reported an error; the message was about scaling of the data.
http://www.sakurai.comp.ae.keio.ac.jp/classes/IntInfProc-class/2015/10SVM.pdf
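
That scaling message typically means some pixel columns have zero variance, so svm (which scales its inputs by default) cannot scale them. Two hedged workarounds, neither taken from the article:

# Hedged sketch 1: disable the built-in scaling
M <- svm(label ~ ., data = data.train, scale = FALSE,
         gamma = 0.1767767, cost = 2.828427)

# Hedged sketch 2: drop zero-variance columns before fitting
keep <- sapply(data.train[, -ncol(data.train)], sd) > 0
M <- svm(label ~ ., data = data.train[, c(which(keep), ncol(data.train))],
         gamma = 0.1767767, cost = 2.828427)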

Bagging

#
# Bagging
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(adabag)
M<-bagging(label~.,data=data.train)

# prediction
V<-predict(M,data.train[,-ncol(data.train)])
P<-predict(M,data.test[,-ncol(data.test)])

# result
u<-table(data.train[,ncol(data.train)],V$class) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P$class)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test

Boosting

#
# Boosting
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model 
library(adabag)
M<-boosting(label~.,data=data.train)

# predict
V<-predict(M,data.train[,-ncol(data.train)])
P<-predict(M,data.test[,-ncol(data.test)])

# result
u<-table(data.train[,ncol(data.train)],V$class) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P$class)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test

Random Forest

#
# Random Forest
#

# dataset
data.train <- read.csv("train.csv")
data.test  <- read.csv("test.csv")

# model
library(randomForest)
M<-randomForest(label~.,data=data.train,na.action="na.omit",mtry=45)

# predict
V<-predict(M,data.train[,-ncol(data.train)])
P<-predict(M,data.test[,-ncol(data.test)])

# result
u<-table(data.train[,ncol(data.train)],V) # for train
b<-(u[1,1] + u[2,2]) / (u[1,1] + u[2,2] + u[1,2] + u[2,1])

t<-table(data.test[,ncol(data.test)],P)   # for test
a<-(t[1,1] + t[2,2]) / (t[1,1] + t[2,2] + t[1,2] + t[2,1])

b # accuracy for train
a # accuracy for test
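
mtry was fixed at 45 here (the randomForest default for a 1296-feature classification problem would be floor(sqrt(1296)) = 36). A hedged sketch of how mtry could be tuned instead of fixed, not taken from the article:

# Hedged sketch: let tuneRF search for a good mtry value
x <- data.train[, -ncol(data.train)]
y <- as.factor(data.train[, ncol(data.train)])
set.seed(1)
tuneRF(x, y, mtryStart = floor(sqrt(ncol(x))), stepFactor = 1.5, improve = 0.01)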

References

  1. http://qiita.com/nrhk/items/2f4a5bfe0585472823f6
  2. http://qiita.com/nrhk/items/1694fb6937301fffea94
  3. http://www.slideshare.net/tojimat/tokyor-41?next_slideshow=1