Implementation of MobileNetv2-SSDLite (Pascal VOC, ReLU6 layer enabled) with Caffe

Posted at 2019-03-20

MobileNetv2-SSDLite: https://github.com/chuanqi305/MobileNetv2-SSDLite

Environment

  • Ubuntu 16.04
  • CUDA 9.0
  • cuDNN 7
  • caffe-ssd
  • Tensorflow-GPU v1.12.0
  • [Docker] nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

Procedure

Advance_preparation
$ sudo apt-get update;sudo apt-get upgrade -y
$ sudo apt-get install -y autoconf automake libtool curl make g++ unzip wget git nano
$ sudo apt-get install -y --no-install-recommends libboost-all-dev libopenblas-dev libsnappy-dev
$ sudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev libleveldb-dev
$ sudo apt-get install -y libhdf5-10 libhdf5-serial-dev libhdf5-dev libhdf5-cpp-11
$ sudo apt-get install -y python3-dev python3-numpy python3-skimage gfortran libturbojpeg
$ sudo apt-get install -y python-dev python-numpy python-skimage
$ sudo apt-get install -y python3-pip python-pip
$ sudo -H pip install pip --upgrade
$ sudo -H pip3 install pip --upgrade
$ sudo -H pip install opencv-python
$ sudo -H pip3 install opencv-python

$ cd ~
$ wget https://github.com/protocolbuffers/protobuf/releases/download/v2.6.1/protobuf-2.6.1.zip
$ unzip protobuf-2.6.1.zip;rm protobuf-2.6.1.zip
$ cd protobuf-2.6.1
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
$ sudo ldconfig
Custom_Caffe_Clone
$ cd ~
$ git clone https://github.com/chuanqi305/ssd.git
$ cd ssd
$ cp Makefile.config.example Makefile.config
$ nano Makefile.config
Edit_Makefile.config
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#   You should not set this flag if you will be reading LMDBs with any
#   possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the lines after *_35 for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_61,code=sm_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
# BLAS := atlas
BLAS := open
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
#PYTHON_INCLUDE := /usr/include/python2.7 \
#       /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda2
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
#         $(ANACONDA_HOME)/include/python2.7 \
#         $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python-py35 python3.5m
PYTHON_INCLUDE := /usr/include/python3.5m \
                 /usr/lib/python3.5/dist-packages/numpy/core/include \
                 /usr/local/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
Custom_Caffe_Build
$ sudo -H pip install -r python/requirements.txt
$ sudo -H pip3 install -r python/requirements.txt
$ rm -rf .build_release
$ rm -rf build
$ make all -j8
$ make pycaffe
$ export PYTHONPATH=/<caffe path>/python/:$PYTHONPATH
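
If the build succeeded, pycaffe should now be importable. A minimal sanity check (a sketch; the path below is a placeholder and must match the <caffe path> exported above):

Check_pycaffe_import
import sys
sys.path.insert(0, '/path/to/ssd/python')  # placeholder; use your <caffe path>
import caffe
print(caffe.__file__)  # should point into the ssd checkout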
Download_MobileNetv2-SSDLite_COCO_Model
$ cd ~
$ git clone https://github.com/chuanqi305/MobileNetv2-SSDLite.git
$ cd ~/MobileNetv2-SSDLite/ssdlite
$ wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
$ tar -zxvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
$ rm ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
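
Once TensorFlow is installed (see the pip3 step below), the frozen graph can be inspected to confirm the download; a short sketch using the same TF 1.x API as dump_tensorflow_weights.py:

Inspect_frozen_graph
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.FastGFile('ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print(len(graph_def.node), 'nodes')
for node in graph_def.node[:5]:  # print the first few ops as a sanity check
    print(node.op, node.name)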
Edit_weight_output_program
$ cd ~/MobileNetv2-SSDLite/ssdlite
$ nano dump_tensorflow_weights.py
dump_tensorflow_weights.py
import tensorflow as tf
import cv2
import numpy as np
import os

def graph_create(graphpath):
    with tf.gfile.FastGFile(graphpath, 'rb') as graphfile:
        graphdef = tf.GraphDef()
        graphdef.ParseFromString(graphfile.read())

        return tf.import_graph_def(graphdef, name='',return_elements=[])

graph_create("ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb")
save_dir = 'output'
if not os.path.exists(save_dir):
    os.mkdir(save_dir)

with tf.Session() as sess:
    tensors = [tensor for tensor in tf.get_default_graph().as_graph_def().node]
    for t in tensors:
      if ('weights' in t.name \
             or 'bias' in t.name \
             or 'moving_variance' in t.name \
             or 'moving_mean' in t.name \
             or 'beta' in t.name \
             or 'gamma' in t.name \
             or 'BatchNorm/batchnorm/sub' in t.name \
             or 'BatchNorm/batchnorm/mul' in t.name) \
             and ('sub' in t.name or 'mul/' in t.name or 'read' in t.name): 
         ts = tf.get_default_graph().get_tensor_by_name(t.name + ":0")
         data = ts.eval()
         #print(t.name)
         names = t.name.split('/')
         layer_name = None
         paratype = None
         if 'BatchNorm/batchnorm/sub' in t.name or 'biases' in t.name:
              paratype = 'biases'
         elif 'BatchNorm/batchnorm/mul' in t.name:
              paratype = 'weights_scale'
         elif 'weights' in t.name:
              paratype = 'weights'
         elif 'moving_variance' in t.name:
              paratype = 'bn_moving_variance'
         elif 'moving_mean' in t.name:
              paratype = 'bn_moving_mean'
         elif 'beta' in t.name:
              paratype = 'beta'
         elif 'gamma' in t.name:
              paratype = 'gamma'

         if names[2] == 'Conv' or names[2] == 'Conv_1':
              layer_name = names[2]
         elif 'expanded_conv' in names[2]:
              layer_name = names[2].replace('expanded_', '') + '_' + names[3]
         elif  'layer_19' in names[2]:
              substr = names[2].split('_')
              layer_name = 'layer_19_' + substr[2] + '_' + substr[4]
              if 'depthwise' in names[2]:
                  layer_name += "_depthwise"
         elif  'BoxPredictor' in names[0]:
              layer_name = names[0] + '_' + names[1]
         output_name = layer_name + "_"  + paratype
         print(output_name)
         #print ts.get_shape()

         if len(data.shape) == 4:
             caffe_weights = data.transpose(3, 2, 0, 1)
             origin_shape = caffe_weights.shape
             boxes = 0
             if 'depthwise' not in output_name:
                 if output_name.find('BoxEncodingPredictor') != -1:
                     boxes = caffe_weights.shape[0] // 4
                 elif output_name.find('ClassPredictor') != -1:
                     boxes = caffe_weights.shape[0] // 91

                 if output_name.find('BoxEncodingPredictor') != -1:
                     tmp = caffe_weights.reshape(boxes, 4, -1).copy()
                     new_weights = np.zeros(tmp.shape, dtype=np.float32)
                     #tf order:    [y, x, h, w]
                     #caffe order: [x, y, w, h]
                     if 'BoxPredictor_0/BoxEncodingPredictor/weights' in t.name:
                         #caffe first box layer [(0.2, 1.0), (0.2, 2.0), (0.2, 0.5)]
                         #tf first box layer    [(0.1, 1.0), (0.2, 2.0), (0.2, 0.5)]
                         #adjust the box by weights and bias change
                         new_weights[:, 0] = tmp[:, 1] * 0.5
                         new_weights[:, 1] = tmp[:, 0] * 0.5
                     else:
                         new_weights[:, 0] = tmp[:, 1]
                         new_weights[:, 1] = tmp[:, 0]
                     new_weights[:, 2] = tmp[:, 3]
                     new_weights[:, 3] = tmp[:, 2]
                     caffe_weights = new_weights.reshape(origin_shape).copy()
                 if output_name.find('BoxEncodingPredictor') != -1 or \
                     output_name.find('ClassPredictor') != -1:
                     tmp = caffe_weights.reshape(boxes, -1).copy()
                     new_weights = np.zeros(tmp.shape, dtype=np.float32)
                     #tf aspect ratio:   [1, 2, 3, 0.5, 0.333333333, 1]
                     #caffe aspect ratio:[1, 1, 2, 3, 0.5, 0.333333333]
                     if boxes == 6:
                         new_weights[0] = tmp[0]
                         new_weights[1] = tmp[5]
                         new_weights[2] = tmp[1]
                         new_weights[3] = tmp[2]
                         new_weights[4] = tmp[3]
                         new_weights[5] = tmp[4]
                         caffe_weights = new_weights.reshape(origin_shape).copy()
             caffe_weights.tofile(os.path.join(save_dir, output_name + '.dat'))
             print(caffe_weights.shape)
         else:
             caffe_bias = data
             boxes = 0
             if 'depthwise' not in output_name:
                 if output_name.find('BoxEncodingPredictor') != -1:
                     boxes = caffe_bias.shape[0] // 4
                 elif output_name.find('ClassPredictor') != -1:
                     boxes = caffe_bias.shape[0] // 91
                 if output_name.find('BoxEncodingPredictor') != -1:
                     tmp = caffe_bias.reshape(boxes, 4).copy()
                     new_bias = np.zeros(tmp.shape, dtype=np.float32)
                     new_bias[:, 0] = tmp[:, 1]
                     new_bias[:, 1] = tmp[:, 0]
                     new_bias[:, 2] = tmp[:, 3]
                     new_bias[:, 3] = tmp[:, 2]
                     caffe_bias = new_bias.flatten().copy()

                 if output_name.find('BoxEncodingPredictor') != -1 or \
                     output_name.find('ClassPredictor') != -1:
                     tmp = caffe_bias.reshape(boxes, -1).copy()
                     new_bias = np.zeros(tmp.shape, dtype=np.float32)
                     if boxes == 6:
                         new_bias[0] = tmp[0]
                         new_bias[1] = tmp[5]
                         new_bias[2] = tmp[1]
                         new_bias[3] = tmp[2]
                         new_bias[4] = tmp[3]
                         new_bias[5] = tmp[4]
                         caffe_bias = new_bias.flatten()
                     elif 'BoxPredictor_0/BoxEncodingPredictor/biases' in t.name:
                         #caffe first box layer [(0.2, 1.0), (0.2, 2.0), (0.2, 0.5)]
                         #tf first box layer    [(0.1, 1.0), (0.2, 2.0), (0.2, 0.5)]
                         #adjust the box by weights and bias change
                         new_bias[0,:2] = tmp[0,:2] * 0.5
                         new_bias[0,2] = tmp[0,2] + (np.log(0.5) / 0.2)
                         new_bias[0,3] = tmp[0,3] + (np.log(0.5) / 0.2)
                         new_bias[1] = tmp[1]
                         new_bias[2] = tmp[2]
                         caffe_bias = new_bias.flatten()
                 print(caffe_bias.shape)
             caffe_bias.tofile(os.path.join(save_dir, output_name + '.dat'))
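
The core of the script is the axis reorder: TensorFlow stores convolution weights as [height, width, in_channels, out_channels], while Caffe expects [out_channels, in_channels, height, width]. A minimal illustration with a dummy tensor:

Weight_layout_example
import numpy as np

# Dummy 3x3 kernel, 32 input channels, 64 output channels (TF layout: H, W, in, out)
tf_weights = np.zeros((3, 3, 32, 64), dtype=np.float32)
caffe_weights = tf_weights.transpose(3, 2, 0, 1)  # -> (out, in, H, W), as in the script above
print(caffe_weights.shape)  # (64, 32, 3, 3)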
Edit_weight_conversion_program
$ sudo -H pip3 install tensorflow-gpu==1.12.0 --upgrade
$ sudo -H pip3 install numpy --upgrade
$ python3 dump_tensorflow_weights.py
$ cp load_caffe_weights.py BK_load_caffe_weights.py
$ nano load_caffe_weights.py
Before
#Before editing load_caffe_weights.py

caffe_root = '/home/yaochuanqi/work/ssd/caffe/'
for key in net.params.iterkeys():
print key
After
#After editing load_caffe_weights.py

caffe_root = '/path/to/your/ssd-caffe/'
# e.g. caffe_root = '/ssd/'

for key in net.params.keys():
print(key)
load_caffe_weights.py
import numpy as np
import sys,os
caffe_root = '/ssd/'
sys.path.insert(0, caffe_root + 'python')
import caffe

deploy_proto = 'deploy.prototxt'
save_model = 'deploy.caffemodel'

weights_dir = 'output'
box_layers = ['conv_13/expand', 'Conv_1', 'layer_19_2_2', 'layer_19_2_3', 'layer_19_2_4', 'layer_19_2_5']
def load_weights(path, shape=None):
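    # Read a raw float32 dump written by dump_tensorflow_weights.py;
    # note the .dat file is deleted (os.unlink below) once it has been consumed.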
    weights = None
    if shape is None: 
        weights = np.fromfile(path, dtype=np.float32)
    else:
        weights = np.fromfile(path, dtype=np.float32).reshape(shape)
    os.unlink(path)
    return weights

def load_data(net):
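    # Walk every layer of the deploy net and fill its blobs from the dumped
    # tensors, mapping TensorFlow scope names to the Caffe layer names.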
    #for key in net.params.iterkeys():
    for key in net.params.keys():
        if type(net.params[key]) is caffe._caffe.BlobVec:
            print(key)
            if 'mbox' not in key and (key.startswith("conv") or key.startswith("Conv") or key.startswith("layer")):
                print('conv')
                if key.endswith("/bn"):
                    prefix = weights_dir + '/' + key.replace('/', '_')
                    net.params[key][0].data[...] = load_weights(prefix + '_moving_mean.dat')
                    net.params[key][1].data[...] = load_weights(prefix + '_moving_variance.dat')
                    net.params[key][2].data[...] = np.ones(net.params[key][2].data.shape, dtype=np.float32)
                elif key.endswith("/scale"):
                    prefix = weights_dir + '/' + key.replace('scale','bn').replace('/', '_')
                    net.params[key][0].data[...] = load_weights(prefix + '_gamma.dat')
                    net.params[key][1].data[...] = load_weights(prefix + '_beta.dat')
                else:
                    prefix = weights_dir + '/' + key.replace('/', '_')
                    ws = np.ones((net.params[key][0].data.shape[0], 1, 1, 1), dtype=np.float32)
                    if os.path.exists(prefix + '_weights_scale.dat'):
                        ws = load_weights(prefix + '_weights_scale.dat', ws.shape)
                    net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape) * ws
                    if len(net.params[key]) > 1:
                        net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')

            elif 'mbox_loc/depthwise' in key or 'mbox_conf/depthwise' in key:
                prefix = key[0:key.find('_mbox')]
                index = box_layers.index(prefix)
                if 'mbox_loc' in key:
                    prefix = weights_dir + '/BoxPredictor_' + str(index) + '_BoxEncodingPredictor_depthwise'
                else:
                    prefix = weights_dir + '/BoxPredictor_' + str(index) + '_ClassPredictor_depthwise'
                if key.endswith("/bn"):
                    net.params[key][0].data[...] = load_weights(prefix + '_bn_moving_mean.dat')
                    net.params[key][1].data[...] = load_weights(prefix + '_bn_moving_variance.dat')
                    net.params[key][2].data[...] = np.ones(net.params[key][2].data.shape, dtype=np.float32)
                elif key.endswith("/scale"):
                    net.params[key][0].data[...] = load_weights(prefix + '_gamma.dat')
                    net.params[key][1].data[...] = load_weights(prefix + '_beta.dat')
                else:
                    print(key)
                    net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
                    if len(net.params[key]) > 1:
                        net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
            elif key.endswith("mbox_loc"):
                prefix = key.replace("_mbox_loc", "")
                index = box_layers.index(prefix)
                prefix = weights_dir + '/BoxPredictor_' + str(index) + '_BoxEncodingPredictor'
                net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
                net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
            elif key.endswith("mbox_conf"):
                prefix = key.replace("_mbox_conf", "")
                index = box_layers.index(prefix)
                prefix = weights_dir + '/BoxPredictor_' + str(index) + '_ClassPredictor'
                net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
                net.params[key][1].data[...] = load_weights(prefix + '_biases.dat')
            else:
                print("error key " + key)

net_deploy = caffe.Net(deploy_proto, caffe.TEST)

load_data(net_deploy)
net_deploy.save(save_model)
Weight_conversion_(Tensorflow->Caffe)
$ python3 load_caffe_weights.py
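
To confirm the conversion, the generated caffemodel can be reloaded and its parameter shapes printed; a quick sketch, assuming the same caffe_root as in load_caffe_weights.py:

Check_converted_model
import sys
caffe_root = '/ssd/'                       # same value as in load_caffe_weights.py
sys.path.insert(0, caffe_root + 'python')
import caffe

net = caffe.Net('deploy.prototxt', 'deploy.caffemodel', caffe.TEST)
for name, blobs in net.params.items():
    print(name, [b.data.shape for b in blobs])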
Editing_MS-COCO_to_Pascal_VOC_conversion_program
$ cp coco2voc.py BK_coco2voc.py
$ nano coco2voc.py
Before
#Before editing coco2voc.py

caffe_root = '/home/yaochuanqi/work/ssd/caffe/'
for key in net.params.iterkeys():
x = wt.shape[0] / 91
After
#After editing coco2voc.py

caffe_root = '/path/to/your/ssd-caffe/'
# e.g. caffe_root = '/opt/movidius/ssd-caffe/'

for key in net.params.keys():
x = int(wt.shape[0] / 91)
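
The int() is needed because Python 3's / always returns a float, which would fail where an integer (such as a reshape dimension) is required; Python 2's / on integers was integer division. A minimal illustration (the value 273 is hypothetical):

Python3_division_note
wt_rows = 273            # e.g. 3 boxes x 91 COCO classes
x = int(wt_rows / 91)    # 3; plain / would give 3.0 in Python 3
# equivalently: x = wt_rows // 91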
MS-COCO_to_Pascal_VOC_conversion
$ python3 coco2voc.py

Finally, the trained Pascal VOC model files are generated at the paths below.

Destination_path
ssdlite/deploy_voc.caffemodel
ssdlite/voc/deploy.prototxt
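
As a final check, the VOC model can be run on an image. The sketch below assumes the usual conventions of this repository's MobileNet-SSD deploy files (300x300 input blob named 'data', output 'detection_out', mean 127.5, scale 0.007843); verify them against voc/deploy.prototxt. 'test.jpg' is a hypothetical image.

Run_VOC_model
import sys
import cv2
import numpy as np

caffe_root = '/ssd/'
sys.path.insert(0, caffe_root + 'python')
import caffe

net = caffe.Net('voc/deploy.prototxt', 'deploy_voc.caffemodel', caffe.TEST)
img = cv2.imread('test.jpg')
blob = (cv2.resize(img, (300, 300)).astype(np.float32) - 127.5) * 0.007843
blob = blob.transpose(2, 0, 1)[np.newaxis, ...]
net.blobs['data'].data[...] = blob
detections = net.forward()['detection_out'][0, 0]  # rows: [image_id, label, conf, xmin, ymin, xmax, ymax]
for det in detections:
    if det[2] > 0.5:
        print('label %d, confidence %.2f' % (int(det[1]), det[2]))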