Building a Raspbian Jessie + ROS Kinetic + samba 4.2.14 + Python 3.4.7 + TensorFlow 1.3.0 + Keras 2.1.2 + OpenCV 3.3.1 + Jupyter Notebook + Single Shot MultiBox Detector (SSD) environment

Posted at 2017-12-06

An English version of this article is posted in the comment section.
#◆ Introduction
Install and build the ROS + deep learning (DeepLearning) environment named in the title on a Raspberry Pi 3 model B+.
The procedure assumes a clean start from installing the OS; if you begin from that initial state, it should complete almost entirely by copy and paste, without stumbling.

I don't like clutter, so pyenv and virtualenv are not used.
Permissions are set loosely and security is not given any particular consideration.
Please forgive the spots where the same package gets installed more than once.
Even a flawless, non-stop run takes more than half a day, so brace yourself.
Once my Intel Movidius Neural Compute Stick arrives, I intend to try installing it, knowing full well it is not officially supported.
Next time, I plan to rework the steps below on a Raspbian Stretch base with the Intel Movidius Neural Compute Stick.

★[Next] Building a motion-detection environment with the NCS (Neural Compute Stick)
https://qiita.com/PINTO/items/b084fe3dc716c42e2867
#◆ Environment [as of 2017-12-06]
・Windows 10 Pro [work PC] + Tera Term
・Raspberry Pi 3 model B+
・Raspbian Jessie 2017-07-05
・ROS 1.12.11
・samba 4.2.14
・Python 2.7.9 and 3.4.2 (preinstalled with the OS)
・Python 3.4.7 (installed additionally below)
・TensorFlow 1.3.0
・Keras 2.1.2
・OpenCV 3.3.1
・Jupyter Notebook
・MicroSD card, Class 10 SDHC, 32 GB
・USB keyboard, USB mouse, LCD TV, HDMI cable
・Wired LAN or Wi-Fi environment with Internet access

#◆ Installing Raspbian Jessie
1.Download the image below
  http://ftp.jaist.ac.jp/pub/raspberrypi/raspbian/images/raspbian-2017-07-05/2017-07-05-raspbian-jessie.zip
2.Extract the downloaded zip
3.Download and install Win32DiskImager
  https://ja.osdn.net/projects/sfnet_win32diskimager/downloads/Archive/win32diskimager-1.0.0-install.exe/
4.Insert/connect the SD card to the host PC
5.Launch Win32DiskImager, select 2017-07-05-raspbian-jessie.img, and write it to the card
6.Insert the SD card into the Raspberry Pi 3 and power it on
7.Start menu → Preferences → Raspberry Pi Configuration → Interfaces → set the options you need to "Enabled"
8.Start menu → Preferences → Raspberry Pi Configuration → Localisation
 (1)Set Locale: "ja", "JP", "UTF-8"
 (2)Timezone: "Asia", "Tokyo"
 (3)Keyboard: "Japan", "Japanese"
 (4)Wi-Fi country: "JP Japan"
9.Run the following command in the Terminal app

$ sudo dpkg-reconfigure keyboard-configuration

10.Generic 105-key (Intl) PC → Japanese → Japanese → The default for the keyboard layout → No compose key
11.Disable Wi-Fi power-save mode (see the check at the end of this section)

$ sudo nano /etc/network/interfaces

  ※Add the line below directly under the wpa-conf /etc/wpa_supplicant/*.conf line
   wireless-power off
12.Set the root user's password

$ sudo passwd root ※register any password you like
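
※To verify the Wi-Fi power-save change from step 11 after the next reboot, the following check should work, assuming the wireless-tools package is present (it is on stock Raspbian Jessie):

$ iwconfig wlan0 | grep "Power Management"
Power Management:off ※if it still reports "on", re-check /etc/network/interfaces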

#◆ Expanding the microSD card partition

$ sudo raspi-config
7.Advanced Options → A1.Expand Filesystem
yes
Finish
reboot now? → yes
$ df -h
※Success if the size of /dev/root has grown

#◆ Installing ROS [Kinetic]

1.Run the following commands

$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
$ sudo apt-get update;sudo apt-get -y upgrade
$ sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential cmake
$ sudo rosdep init
$ rosdep update
$ mkdir -p ~/ros_catkin_ws
$ pushd ~/ros_catkin_ws
$ rosinstall_generator ros_comm common_msgs tf --rosdistro kinetic --deps --wet-only --tar > kinetic-ros_comm-wet.rosinstall
$ wstool init src kinetic-ros_comm-wet.rosinstall
$ rosdep install -y --from-paths src --ignore-src --rosdistro kinetic -r --os=debian:jessie
$ sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic -j2
$ popd
$ echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
$ source ~/.bashrc
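
A minimal sanity check once the build has finished (an optional sketch; it assumes rospkg, which the rosdep packages above pull in, so `rosversion -d` simply echoes the active distro):

$ rosversion -d
kinetic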

#◆ Adding environment variables

1.Run the following commands

$ export ROS_HOSTNAME=raspberrypi.local
$ export ROS_IP=`hostname -I`
$ export ROS_MASTER_URI=http://192.168.xxx.xxx:11311

※For the `hostname -I` part, do not type your machine's host name; enter the command literally as written, so the shell substitutes its output
※If the master is started on the Raspberry Pi side, use export ROS_MASTER_URI=http://`hostname -I`:11311
※To remove the environment variable above, run unset ROS_MASTER_URI
※If a machine other than the Raspberry Pi acts as the master, substitute its IP address

 Example) To make a remote Ubuntu machine the master, add the following on the Ubuntu side.
   These commands must be run every time the PC boots, so it is convenient to put them in ~/.bashrc.

   $ export ROS_HOSTNAME=ubuntu.local
   $ export ROS_IP=`hostname -I`
   $ export ROS_MASTER_URI=http://`hostname -I`:11311
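
One caveat: `hostname -I` can print more than one address (for example when both eth0 and wlan0 are up) plus a trailing space, which makes ROS_IP invalid. A sketch that keeps only the first address:

$ export ROS_IP=$(hostname -I | awk '{print $1}')
$ echo $ROS_IP ※should print exactly one IP address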

#◆ Installing Samba

1.Run the following commands

$ sudo apt-get install -y samba
$ sudo nano /etc/samba/smb.conf

2.Append the following to the configuration file (smb.conf) and save

[pi]
path = /home/pi
read only = No
guest ok = Yes
force user = pi

[etc]
path = /etc
read only = No
guest ok = Yes
force user = root

[usr]
path = /usr
read only = No
guest ok = Yes
force user = root

3.Restart the samba daemon

$ sudo service smbd restart
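
Optional sanity checks: testparm validates the smb.conf syntax, and smbclient (a separate package; sudo apt-get install smbclient if it is missing) lists the shares as a guest:

$ testparm -s
$ smbclient -L localhost -N ※[pi], [etc] and [usr] should appear in the share list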

#◆ Expanding the TMP area and moving log files to a RAM disk

1.Run the following commands

$ cd /etc
$ sudo cp fstab fstab_org
$ sudo nano /etc/fstab

2.Append the following to the end of the file and save

# Expand the TMP area
tmpfs /tmp tmpfs defaults,size=512m,noatime,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,size=128m,noatime,mode=1777 0 0
# Mount /var/log on a RAM disk
tmpfs /var/log tmpfs defaults,size=32m,noatime,mode=0755 0 0
# Mount ~/.ros/log on a RAM disk
tmpfs /home/pi/.ros/log tmpfs defaults,size=32m,noatime,mode=1777 0 0

3.Reboot the Raspberry Pi

$ sudo reboot
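
After the reboot, df should report tmpfs for each of the new mount points. If the /home/pi/.ros/log mount failed because the directory did not exist at boot, create it once with mkdir -p ~/.ros/log and reboot again:

$ df -h /tmp /var/tmp /var/log /home/pi/.ros/log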

#◆ Disabling unneeded system logs
1.Run the following commands

$ cd /etc
$ sudo cp rsyslog.conf rsyslog.conf_org
$ sudo nano rsyslog.conf

2.Rewrite the file as follows and save

###############
#### RULES ####
###############

#
# First some standard log files.  Log by facility.
#
auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
daemon.*                        -/var/log/daemon.log
#kern.*                         -/var/log/kern.log
#lpr.*                          -/var/log/lpr.log
#mail.*                         -/var/log/mail.log
#user.*                         -/var/log/user.log

#
# Logging for the mail system.  Split it up so that
# it is easy to write scripts to parse these files.
#
#mail.info                      -/var/log/mail.info
#mail.warn                      -/var/log/mail.warn
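
The change takes effect once rsyslog is restarted (or at the next reboot):

$ sudo service rsyslog restart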

#◆ Recreating the log output folders automatically at Raspberry Pi boot

1.Run the following commands

$ cd /etc
$ sudo cp rc.local rc.local_org
$ sudo nano rc.local

2.Rewrite the existing content as follows and save

# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  printf "My IP address is %s\n" "$_IP"
fi

# 
# Automatically create the folders on the RAM disk
# 
mkdir -p /var/log/ConsoleKit
mkdir -p /var/log/samba
mkdir -p /var/log/fsck
mkdir -p /var/log/apt
mkdir -p /var/log/ntpstats
chown root.ntp /var/log/ntpstats
chown root.adm /var/log/samba
touch /var/log/lastlog
touch /var/log/wtmp
touch /var/log/btmp
chown root.utmp /var/log/lastlog
chown root.utmp /var/log/wtmp
chown root.utmp /var/log/btmp

exit 0
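
After the next reboot it is worth confirming that rc.local recreated the folders on the fresh RAM disk:

$ ls -ld /var/log/samba /var/log/ntpstats /var/log/apt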

#◆ Installing part of the required packages

1.Run the following commands

$ sudo apt-get install -y build-essential libc6-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev cmake git pkg-config unzip qtbase5-dev python-dev python3-dev python-numpy python3-numpy cmake-qt-gui mesa-utils libgl1-mesa-dri libprotobuf-dev protobuf-compiler libvtk5-dev libvtk5-qt4-dev python-vtk tcl-vtk libopencv-dev libgtk-3-dev libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff5-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libxine2-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev v4l-utils gfortran python-opencv libgtk2.0-dev libx264-dev libqt4-core libqtgui4 libqt4-test libqt4-opengl-dev libatlas-base-dev libeigen3-dev libtesseract-dev libleptonica-dev tesseract-ocr tesseract-ocr-jpn tesseract-ocr-osd libblas-dev liblapack-dev python-setuptools python3-decorator python3-scipy python3-pandas python3-h5py libhdf5-dev libpq5 libpq-dev
$ sudo pip3 install h5py;sudo pip3 install imageio;sudo apt-get update;sudo apt-get upgrade;sudo pip3 install --upgrade pillow matplotlib

#◆ Installing Python 3.4.7

1.Run the following commands

$ cd ~
$ wget https://www.python.org/ftp/python/3.4.7/Python-3.4.7.tgz
$ tar -zxvf Python-3.4.7.tgz
$ cd Python-3.4.7
$ ./configure
$ sudo make -j $(($(nproc) + 1))
$ sudo make install
$ sudo pip3 install -U pip;sudo pip3 install -U setuptools
$ sudo reboot
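
Once the Pi is back up, a quick check that the newly built interpreter (installed under /usr/local, the default configure prefix) is the one being picked up:

$ which python3 ※should show /usr/local/bin/python3
$ python3 --version ※should show Python 3.4.7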

#◆ Installing TensorFlow 1.3.0 for Python 3.x

1.Run the following commands

$ git clone https://github.com/DeftWork/rpi-tensorflow.git
$ cd rpi-tensorflow
$ sudo pip3 install tensorflow-1.3.0-cp34-cp34m-linux_armv7l.whl
$ sudo apt-get update
$ python3

Python 3.4.7 (default, Dec  5 2017, 01:51:58)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> exit()
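
To confirm the installed version non-interactively:

$ python3 -c 'import tensorflow as tf; print(tf.__version__)'
1.3.0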

#◆ Installing Keras 2.1.2 (as of 2017-12-05: supports up to Python 3.5)

1.Run the following command

$ sudo pip3 install keras
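
pip installs the latest release by default, so if a Keras newer than 2.1.2 has shipped since this article was written, pinning the version used here is the safer choice:

$ sudo pip3 install keras==2.1.2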

2.Prepare a JSON file that specifies TensorFlow as the backend

$ cd ~
$ mkdir .keras
$ nano ~/.keras/keras.json

3.An empty file opens; add the following and save

{
  "image_data_format": "channels_last",
  "epsilon": 1e-07,
  "floatx": "float32",
  "backend": "tensorflow"
}

4.Check that it runs; if the import raises no error, it is working

$ python3

Python 3.4.7 (default, Nov 19 2017, 20:51:35)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using TensorFlow backend.
>>> exit()

#◆ Installing OpenCV 3.3.1

1.Run the following commands

$ cd ~
$ git clone https://github.com/opencv/opencv.git
$ git clone https://github.com/opencv/opencv_contrib.git
$ cd opencv;mkdir build;cd build
$ sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D PYTHON_DEFAULT_EXECUTABLE=$(which python3) -D INSTALL_PYTHON_EXAMPLES=OFF -D BUILD_opencv_python2=OFF -D BUILD_opencv_python3=ON -D WITH_OPENCL=OFF -D WITH_OPENGL=OFF -D WITH_TBB=OFF -D BUILD_TBB=OFF -D WITH_CUDA=OFF -D ENABLE_NEON:BOOL=ON -D WITH_QT=OFF -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules -D BUILD_opencv_dnn_modern=OFF ..
$ sudo make -j $(($(nproc) + 1))
$ sudo make install
$ sudo /bin/bash -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/opencv.conf'
$ sudo /bin/bash -c 'echo "/usr/lib" >> /etc/ld.so.conf.d/opencv.conf'
$ sudo ldconfig
$ sudo apt-get update

2.Test loading OpenCV; if running the command below does not cause a Segmentation Fault, it is working

$ python3

Python 3.4.7 (default, Nov 19 2017, 20:51:35)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.3.1-dev'
>>> exit()

#◆ Installing Jupyter Notebook

1.Run the following commands

$ cd ~
$ git clone https://github.com/kleinee/jns.git
$ cd jns
$ sudo chmod +x *.sh
$ sudo pip3 install jupyter;sudo pip3 install readline;sudo pip3 install ipyparallel
$ ./configure_jupyter.sh
$ sudo ./install_tex.sh;sudo ./install_stack.sh

2.Check where Jupyter is installed

$ which jupyter #(1)<---- the path shown here is used in a later step

3.Configure Jupyter Notebook to start automatically

$ sudo nano /etc/systemd/system/jupyter.service

※An empty file opens; enter the following and save

[Unit]
Description=Jupyter notebook

[Service]
Type=simple
PIDFile=/var/run/jupyter-notebook.pid
ExecStart=/usr/local/bin/jupyter notebook #<--- replace the path to the left of "notebook" with the path from (1)
User=pi
Group=pi
WorkingDirectory=/home/pi/notebooks
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

4.Run the following commands (a daemon-reload is included so systemd picks up the newly created unit file)

$ sudo systemctl daemon-reload;sudo systemctl start jupyter;sudo systemctl enable jupyter;sudo systemctl status jupyter
$ sudo reboot

#◆ Verifying Jupyter Notebook
1.From a browser on the work PC, open "http://(Raspberry Pi IP address):8888/"
2.Login password: jns

#◆ Cleanup

1.Run the following commands

$ sudo apt-get autoremove
$ sudo apt-get clean
$ cd ~
$ sudo rm rpi-tensorflow/tensorflow-1.3.0-cp34-cp34m-linux_armv7l.whl;sudo rm Python-3.4.7.tgz

#◆ Installing and trying out SSD (Single Shot MultiBox Detector), a fast deep-learning multiple-object detection algorithm

1.Run the following commands

$ cd ~
$ git clone https://github.com/rykov8/ssd_keras.git
$ cd ssd_keras
$ nano ssd.py

2.Overwrite the entire file (ssd.py) with the source program below and save

ssd.py
"""Keras implementation of SSD."""

import keras.backend as K
from keras.layers import Activation
#from keras.layers import AtrousConvolution2D
from keras.layers import Convolution2D
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import GlobalAveragePooling2D
from keras.layers import Input
from keras.layers import MaxPooling2D
#from keras.layers import merge
from keras.layers.merge import concatenate
from keras.layers import Reshape
from keras.layers import ZeroPadding2D
from keras.models import Model

from ssd_layers import Normalize
from ssd_layers import PriorBox


def SSD300(input_shape, num_classes=21):
    """SSD300 architecture.
    # Arguments
        input_shape: Shape of the input image,
            expected to be either (300, 300, 3) or (3, 300, 300)(not tested).
        num_classes: Number of classes including background.
    # References
        https://arxiv.org/abs/1512.02325
    """
    net = {}
    # Block 1
    input_tensor = Input(shape=input_shape)
    img_size = (input_shape[1], input_shape[0])
    net['input'] = input_tensor
    net['conv1_1'] = Convolution2D(64, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv1_1')(net['input'])
    
    net['conv1_2'] = Convolution2D(64, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv1_2')(net['conv1_1'])

    net['pool1'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same',
                                name='pool1')(net['conv1_2'])
    # Block 2
    net['conv2_1'] = Convolution2D(128, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv2_1')(net['pool1'])

    net['conv2_2'] = Convolution2D(128, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv2_2')(net['conv2_1'])
    net['pool2'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same',
                                name='pool2')(net['conv2_2'])

    # Block 3
    net['conv3_1'] = Convolution2D(256, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv3_1')(net['pool2'])
    net['conv3_2'] = Convolution2D(256, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv3_2')(net['conv3_1'])
    net['conv3_3'] = Convolution2D(256, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv3_3')(net['conv3_2'])
    net['pool3'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same',
                                name='pool3')(net['conv3_3'])
    # Block 4
    net['conv4_1'] = Convolution2D(512, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv4_1')(net['pool3'])
    net['conv4_2'] = Convolution2D(512, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv4_2')(net['conv4_1'])
    net['conv4_3'] = Convolution2D(512, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv4_3')(net['conv4_2'])
    net['pool4'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same',
                                name='pool4')(net['conv4_3'])
    # Block 5
    net['conv5_1'] = Convolution2D(512, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv5_1')(net['pool4'])
    net['conv5_2'] = Convolution2D(512, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv5_2')(net['conv5_1'])
    net['conv5_3'] = Convolution2D(512, (3, 3),
                                   activation='relu',
                                   padding='same',
                                   name='conv5_3')(net['conv5_2'])
    net['pool5'] = MaxPooling2D((3, 3), strides=(1, 1), padding='same',
                                name='pool5')(net['conv5_3'])
    # FC6
    net['fc6'] = Convolution2D(1024, (3, 3), dilation_rate=(6, 6),
                                     activation='relu', padding='same',
                                     name='fc6')(net['pool5'])
    # x = Dropout(0.5, name='drop6')(x)
    # FC7
    net['fc7'] = Convolution2D(1024, (1, 1), activation='relu',
                               padding='same', name='fc7')(net['fc6'])
    # x = Dropout(0.5, name='drop7')(x)
    # Block 6
    net['conv6_1'] = Convolution2D(256, (1, 1), activation='relu',
                                   padding='same',
                                   name='conv6_1')(net['fc7'])


    net['conv6_2'] = Convolution2D(512, (3, 3), strides=(2, 2),
                                   activation='relu', padding='same',
                                   name='conv6_2')(net['conv6_1'])
    # Block 7
    net['conv7_1'] = Convolution2D(128, (1, 1), activation='relu',
                                   padding='same',
                                   name='conv7_1')(net['conv6_2'])
    net['conv7_2'] = ZeroPadding2D()(net['conv7_1'])
    net['conv7_2'] = Convolution2D(256, (3, 3), strides=(2, 2),
                                   activation='relu', padding='valid',
                                   name='conv7_2')(net['conv7_2'])
    # Block 8
    net['conv8_1'] = Convolution2D(128, (1, 1), activation='relu',
                                   padding='same',
                                   name='conv8_1')(net['conv7_2'])
    net['conv8_2'] = Convolution2D(256, (3, 3), strides=(2, 2),
                                   activation='relu', padding='same',
                                   name='conv8_2')(net['conv8_1'])
    # Last Pool
    net['pool6'] = GlobalAveragePooling2D(name='pool6')(net['conv8_2'])
    # Prediction from conv4_3
    net['conv4_3_norm'] = Normalize(20, name='conv4_3_norm')(net['conv4_3'])
    num_priors = 3
    x = Convolution2D(num_priors * 4, (3, 3), padding='same',
                      name='conv4_3_norm_mbox_loc')(net['conv4_3_norm'])
    net['conv4_3_norm_mbox_loc'] = x
    flatten = Flatten(name='conv4_3_norm_mbox_loc_flat')
    net['conv4_3_norm_mbox_loc_flat'] = flatten(net['conv4_3_norm_mbox_loc'])
    name = 'conv4_3_norm_mbox_conf'
    if num_classes != 21:
        name += '_{}'.format(num_classes)
    x = Convolution2D(num_priors * num_classes, (3, 3), padding='same',
                      name=name)(net['conv4_3_norm'])
    net['conv4_3_norm_mbox_conf'] = x
    flatten = Flatten(name='conv4_3_norm_mbox_conf_flat')
    net['conv4_3_norm_mbox_conf_flat'] = flatten(net['conv4_3_norm_mbox_conf'])
    priorbox = PriorBox(img_size, 30.0, aspect_ratios=[2],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv4_3_norm_mbox_priorbox')
    net['conv4_3_norm_mbox_priorbox'] = priorbox(net['conv4_3_norm'])
    # Prediction from fc7
    num_priors = 6
    net['fc7_mbox_loc'] = Convolution2D(num_priors * 4, (3, 3),
                                        padding='same',
                                        name='fc7_mbox_loc')(net['fc7'])
    flatten = Flatten(name='fc7_mbox_loc_flat')
    net['fc7_mbox_loc_flat'] = flatten(net['fc7_mbox_loc'])
    name = 'fc7_mbox_conf'
    if num_classes != 21:
        name += '_{}'.format(num_classes)
    net['fc7_mbox_conf'] = Convolution2D(num_priors * num_classes, (3, 3),
                                         padding='same',
                                         name=name)(net['fc7'])
    flatten = Flatten(name='fc7_mbox_conf_flat')
    net['fc7_mbox_conf_flat'] = flatten(net['fc7_mbox_conf'])
    priorbox = PriorBox(img_size, 60.0, max_size=114.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='fc7_mbox_priorbox')
    net['fc7_mbox_priorbox'] = priorbox(net['fc7'])
    # Prediction from conv6_2
    num_priors = 6
    x = Convolution2D(num_priors * 4, (3, 3), padding='same',
                      name='conv6_2_mbox_loc')(net['conv6_2'])
    net['conv6_2_mbox_loc'] = x
    flatten = Flatten(name='conv6_2_mbox_loc_flat')
    net['conv6_2_mbox_loc_flat'] = flatten(net['conv6_2_mbox_loc'])
    name = 'conv6_2_mbox_conf'
    if num_classes != 21:
        name += '_{}'.format(num_classes)
    x = Convolution2D(num_priors * num_classes, (3, 3), padding='same',
                      name=name)(net['conv6_2'])
    net['conv6_2_mbox_conf'] = x
    flatten = Flatten(name='conv6_2_mbox_conf_flat')
    net['conv6_2_mbox_conf_flat'] = flatten(net['conv6_2_mbox_conf'])
    priorbox = PriorBox(img_size, 114.0, max_size=168.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv6_2_mbox_priorbox')
    net['conv6_2_mbox_priorbox'] = priorbox(net['conv6_2'])
    # Prediction from conv7_2
    num_priors = 6
    x = Convolution2D(num_priors * 4, (3, 3), padding='same',
                      name='conv7_2_mbox_loc')(net['conv7_2'])
    net['conv7_2_mbox_loc'] = x
    flatten = Flatten(name='conv7_2_mbox_loc_flat')
    net['conv7_2_mbox_loc_flat'] = flatten(net['conv7_2_mbox_loc'])
    name = 'conv7_2_mbox_conf'
    if num_classes != 21:
        name += '_{}'.format(num_classes)
    x = Convolution2D(num_priors * num_classes, (3, 3), padding='same',
                      name=name)(net['conv7_2'])
    net['conv7_2_mbox_conf'] = x
    flatten = Flatten(name='conv7_2_mbox_conf_flat')
    net['conv7_2_mbox_conf_flat'] = flatten(net['conv7_2_mbox_conf'])
    priorbox = PriorBox(img_size, 168.0, max_size=222.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv7_2_mbox_priorbox')
    net['conv7_2_mbox_priorbox'] = priorbox(net['conv7_2'])
    # Prediction from conv8_2
    num_priors = 6
    x = Convolution2D(num_priors * 4, (3, 3), padding='same',
                      name='conv8_2_mbox_loc')(net['conv8_2'])
    net['conv8_2_mbox_loc'] = x
    flatten = Flatten(name='conv8_2_mbox_loc_flat')
    net['conv8_2_mbox_loc_flat'] = flatten(net['conv8_2_mbox_loc'])
    name = 'conv8_2_mbox_conf'
    if num_classes != 21:
        name += '_{}'.format(num_classes)
    x = Convolution2D(num_priors * num_classes, (3, 3), padding='same',
                      name=name)(net['conv8_2'])
    net['conv8_2_mbox_conf'] = x
    flatten = Flatten(name='conv8_2_mbox_conf_flat')
    net['conv8_2_mbox_conf_flat'] = flatten(net['conv8_2_mbox_conf'])
    priorbox = PriorBox(img_size, 222.0, max_size=276.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv8_2_mbox_priorbox')
    net['conv8_2_mbox_priorbox'] = priorbox(net['conv8_2'])
    # Prediction from pool6
    num_priors = 6
    x = Dense(num_priors * 4, name='pool6_mbox_loc_flat')(net['pool6'])
    net['pool6_mbox_loc_flat'] = x
    name = 'pool6_mbox_conf_flat'
    if num_classes != 21:
        name += '_{}'.format(num_classes)
    x = Dense(num_priors * num_classes, name=name)(net['pool6'])
    net['pool6_mbox_conf_flat'] = x
    priorbox = PriorBox(img_size, 276.0, max_size=330.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='pool6_mbox_priorbox')
    if K.image_dim_ordering() == 'tf':
        target_shape = (1, 1, 256)
    else:
        target_shape = (256, 1, 1)
    net['pool6_reshaped'] = Reshape(target_shape,
                                    name='pool6_reshaped')(net['pool6'])
    net['pool6_mbox_priorbox'] = priorbox(net['pool6_reshaped'])
    # Gather all predictions
    net['mbox_loc'] = concatenate([net['conv4_3_norm_mbox_loc_flat'],
                             net['fc7_mbox_loc_flat'],
                             net['conv6_2_mbox_loc_flat'],
                             net['conv7_2_mbox_loc_flat'],
                             net['conv8_2_mbox_loc_flat'],
                             net['pool6_mbox_loc_flat']],
                            axis=1,
                                  name='mbox_loc')
    net['mbox_conf'] = concatenate([net['conv4_3_norm_mbox_conf_flat'],
                              net['fc7_mbox_conf_flat'],
                              net['conv6_2_mbox_conf_flat'],
                              net['conv7_2_mbox_conf_flat'],
                              net['conv8_2_mbox_conf_flat'],
                              net['pool6_mbox_conf_flat']],
                              axis=1,
                              name='mbox_conf')
    net['mbox_priorbox'] = concatenate([net['conv4_3_norm_mbox_priorbox'],
                                  net['fc7_mbox_priorbox'],
                                  net['conv6_2_mbox_priorbox'],
                                  net['conv7_2_mbox_priorbox'],
                                  net['conv8_2_mbox_priorbox'],
                                  net['pool6_mbox_priorbox']],
                                 axis=1,
                                 name='mbox_priorbox')
    if hasattr(net['mbox_loc'], '_keras_shape'):
        num_boxes = net['mbox_loc']._keras_shape[-1] // 4
    elif hasattr(net['mbox_loc'], 'int_shape'):
        num_boxes = K.int_shape(net['mbox_loc'])[-1] // 4
    net['mbox_loc'] = Reshape((num_boxes, 4),
                              name='mbox_loc_final')(net['mbox_loc'])
    net['mbox_conf'] = Reshape((num_boxes, num_classes),
                               name='mbox_conf_logits')(net['mbox_conf'])
    net['mbox_conf'] = Activation('softmax',
                                  name='mbox_conf_final')(net['mbox_conf'])
    net['predictions'] = concatenate([net['mbox_loc'],
                               net['mbox_conf'],
                               net['mbox_priorbox']],
                               axis=2,
                               #axis = 0,
                               name='predictions')
    model = Model(net['input'], net['predictions'])
    return model

3.Run the following command

$ nano ssd_layers.py

4.Edit as follows and save (adapting the code from Keras 1.x to Keras 2.x)

#def get_output_shape_for(self, input_shape):
def compute_output_shape(self, input_shape):

5.Download the trained model (weights_SSD300.hdf5) from the site below and place it directly under the ssd_keras folder
 https://mega.nz/#F!7RowVLCL!q3cEVRK9jyOSB9el3SssIA

6.Run the following commands to fix the buggy code

$ cd testing_utils
$ nano videotest.py

7.Edit as follows and save

#vidw = vid.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
#vidh = vid.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)
vidw = vid.get(cv2.CAP_PROP_FRAME_WIDTH)
vidh = vid.get(cv2.CAP_PROP_FRAME_HEIGHT)

8.Specify the video file to run detection on

$ nano videotest_example.py
vid_test.run('path/to/your/video.mkv') #←change this to the path of any video file; to test with a USB webcam, remove the argument or pass the webcam's DeviceID (e.g. 0)

9.Run the following commands

$ cd ~/ssd_keras
$ nano ssd_layers.py

10.Fix as follows and save

#    def compute_output_shape(self, input_shape):
#        num_priors_ = len(self.aspect_ratios)
#        layer_width = input_shape[self.waxis]
#        layer_height = input_shape[self.haxis]
#        num_boxes = num_priors_ * layer_width * layer_height
#        return (input_shape[0], num_boxes, 8)
    def compute_output_shape(self, input_shape):
        num_priors = len(self.aspect_ratios)
        layer_width = input_shape[self.waxis]
        layer_height = input_shape[self.haxis]
        num_boxes = num_priors * layer_width * layer_height
        return (input_shape[0], num_boxes, 8)
#        num_priors_ = len(self.aspect_ratios)
#        prior_boxes = np.concatenate((centers_x, centers_y), axis=1)
#        prior_boxes = np.tile(prior_boxes, (1, 2 * num_priors_))
        num_priors = len(self.aspect_ratios)
        prior_boxes = np.concatenate((centers_x, centers_y), axis=1)
        prior_boxes = np.tile(prior_boxes, (1, 2 * num_priors))

11.Run the SSD detection test. 【Note】You must switch the Raspberry Pi to GUI mode before running the test

$ startx #←switch to GUI mode
$ python3 videotest_example.py #←run this in the Terminal app

#◆ Shrinking the TMP area
1.Run the following commands

$ cd /etc
$ sudo nano /etc/fstab

2.Change the following lines in the file and save

tmpfs /tmp tmpfs defaults,size=32m,noatime,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,size=32m,noatime,mode=1777 0 0

3.Run the following command

$ sudo reboot

#◆ Disabling swap to extend SD card life
1.Run the following commands

$ free
$ sudo swapoff --all
$ sudo apt-get remove dphys-swapfile
$ sudo reboot
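
After the reboot, swap should report zero:

$ free -h ※the Swap line should show 0
$ swapon -s ※should print nothing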

#◆ Pointing the python command at python3
1.Run the following command

$ sudo nano ~/.bashrc

2.Append the following line at the very bottom of the file

alias python=python3

3.Run the following command

$ source ~/.bashrc

4.Run the following command; the python command now launches Python 3

$ python
Python 3.4.7 (default, Dec  5 2017, 01:51:58)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

#◆ Extras
If you run out of swap space at runtime, do the following (if dphys-swapfile was removed in the step above, reinstall it first with sudo apt-get install dphys-swapfile)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024

$ sudo /etc/init.d/dphys-swapfile restart
$ swapon -s ※confirms the new swap size