
How to convert Tensorflow Model to Core ML Model

Posted at 2018-05-31

#INTRODUCTION
Today we are going to cover how to convert a TensorFlow model to a Core ML model. Since Swift has no library to do this conversion, we are going to use a third-party library, tfcoreml. First, please download the sample model.

#GETTING STARTED

###1. Setting up

  1. Download the SampleModel
  2. Go to the tfcoreml GitHub repository and follow the installation guide to install the library.

###2. It's Time to Fire Up
OK, now everything is all set. Let's create our Python project and write the code below.

import tensorflow as tf
import tfcoreml
from coremltools.proto import FeatureTypes_pb2 as _FeatureTypes_pb2
import coremltools

"""FIND GRAPH INFO"""
tf_model_path = "./retrained_graph.pb"
with open(tf_model_path, 'rb') as f:
    serialized = f.read()
tf.reset_default_graph()
original_gdef = tf.GraphDef()
original_gdef.ParseFromString(serialized)

with tf.Graph().as_default() as g:
    tf.import_graph_def(original_gdef, name='')
    ops = g.get_operations()
    N = len(ops)
    # Print the first and last few ops to find the input and output tensors
    for i in [0, 1, 2, N-3, N-2, N-1]:
        print('\n\nop id {} : op type: "{}"'.format(str(i), ops[i].type))
        print('input(s):')
        for x in ops[i].inputs:
            print("name = {}, shape: {}".format(x.name, x.get_shape()))
        print('\noutput(s):')
        for x in ops[i].outputs:
            print("name = {}, shape: {}".format(x.name, x.get_shape()))

So in the code above we loop through the GraphDef, because in order to convert a TF model to a Core ML model we need to know some information about it:

  1. Input name
  2. Output name
  3. Model shape
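Picking these names out of the printed ops can also be sketched in plain Python. The records below are hypothetical stand-ins for the real tf.Operation objects (the middle op name is made up for illustration); the Placeholder and Softmax names are the ones this model actually reports:

```python
# Hypothetical (op_type, output_tensor_name) pairs standing in for tf ops.
ops = [
    ("Placeholder", "input:0"),
    ("Conv2D", "MobilenetV1/Conv2d_0/convolution:0"),  # illustrative name only
    ("Softmax", "final_result:0"),
]

# The model input is the Placeholder's output tensor;
# the prediction is the Softmax's output tensor.
input_name = next(name for op_type, name in ops if op_type == "Placeholder")
output_name = next(name for op_type, name in ops if op_type == "Softmax")
print(input_name)   # input:0
print(output_name)  # final_result:0
```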

So after you run the program you will see:

  1. Input name: the output of the Placeholder op, which is ("input:0").
  2. Output name: the output of the Softmax op towards the end of the graph, which is ("final_result:0").
  3. Model shape: use TensorBoard to find the shape during model creation, or use tf.shape() to find it. Our model's shape is [1, 224, 224, 3].

OK, now let's convert it to an mlmodel.
""" CONVERT TF TO CORE ML """
# Model Shape
input_tensor_shapes = {"input:0": [1, 224, 224, 3]}
# Input Name
image_input_name = ['input:0']
# Output CoreML model path
coreml_model_file = './myModel.mlmodel'
# Output name
output_tensor_names = ['final_result:0']
# Label file for classification
class_labels = 'retrained_labels.txt'

#Convert Process
coreml_model = tfcoreml.convert(
        tf_model_path=tf_model_path,
        mlmodel_path=coreml_model_file,
        input_name_shape_dict=input_tensor_shapes,
        output_feature_names=output_tensor_names,
        image_input_names=image_input_name,
        class_labels=class_labels)

Yeah, now we have successfully converted the TF model into a Core ML model. But wait, we still have to improve our model. Why? Because we need it to classify images with accuracy as high as possible. Let me explain something basic about working with images in a neural network: before we pass an image to the network, we need to preprocess it correctly. This is always a crucial step when using neural networks on images.
"So what about Core ML?"
Core ML handles the image preprocessing automatically when the input type is image, so to improve the model we just need to set the right scale and image biases.

# Get image pre-processing parameters of a saved Core ML model
spec = coremltools.models.utils.load_spec(coreml_model_file)
if spec.WhichOneof('Type') == 'neuralNetworkClassifier':
    nn = spec.neuralNetworkClassifier
elif spec.WhichOneof('Type') == 'neuralNetwork':
    nn = spec.neuralNetwork
elif spec.WhichOneof('Type') == 'neuralNetworkRegressor':
    nn = spec.neuralNetworkRegressor

preprocessing = nn.preprocessing[0].scaler
print('channel scale: ', preprocessing.channelScale)
print('blue bias: ', preprocessing.blueBias)
print('green bias: ', preprocessing.greenBias)
print('red bias: ', preprocessing.redBias)

inp = spec.description.input[0]
if inp.type.WhichOneof('Type') == 'imageType':
    colorspace = _FeatureTypes_pb2.ImageFeatureType.ColorSpace.Name(inp.type.imageType.colorSpace)
    print('colorspace: ', colorspace)

In the code above:

  1. First, we load the spec of the saved Core ML model.
  2. Second, we check what type of network the model is.
  3. Third, we print its image pre-processing parameters (channel scale and the per-channel biases) and the colorspace of its image input.

OK, now let us convert our Core ML model again, this time with the scale and biases set.

coreml_model = tfcoreml.convert(
        tf_model_path=tf_model_path,
        mlmodel_path=coreml_model_file,
        input_name_shape_dict=input_tensor_shapes,
        output_feature_names=output_tensor_names,
        image_input_names=image_input_name,
        class_labels=class_labels,
        red_bias=-1,
        green_bias=-1,
        blue_bias=-1,
        image_scale=2.0/255.0)
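As a quick sanity check on those numbers: Core ML's image scaler computes out = pixel * channelScale + channelBias per channel, so image_scale = 2/255 together with biases of -1 maps 8-bit pixel values from [0, 255] into roughly [-1, 1], the range this kind of retrained graph typically expects. The helper below is just an illustration of that arithmetic, not part of tfcoreml:

```python
def coreml_preprocess(pixel, scale=2.0 / 255.0, bias=-1.0):
    """Illustrates Core ML's per-channel scaler: out = pixel * scale + bias."""
    return pixel * scale + bias

lo = coreml_preprocess(0)    # darkest 8-bit value maps to -1.0
hi = coreml_preprocess(255)  # brightest 8-bit value maps to ~1.0
print(lo, hi)
```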

Yes! That's it. Now we can use the model in our Swift project.
