
I use a custom model for classification in the TensorFlow Camera Demo. I generated a .pb file (a serialized protobuf file) and I could display the huge graph it contains. To convert this graph to an optimized graph, the following procedure from [https://www.oreilly.com/learning/tensorflow-on-android] can be used:

$ bazel-bin/tensorflow/python/tools/optimize_for_inference  \
--input=tf_files/retrained_graph.pb \
--output=tensorflow/examples/android/assets/retrained_graph.pb \
--input_names=Mul \
--output_names=final_result

My question is how to find the input_names and output_names from the graph display. When I don't use the proper names, the app crashes on the device:

E/TensorFlowInferenceInterface(16821): Failed to run TensorFlow inference 
with inputs:[AvgPool], outputs:[predictions]

E/AndroidRuntime(16821): FATAL EXCEPTION: inference

E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible 
shapes: [1,224,224,3] vs. [32,1,1,2048]

E/AndroidRuntime(16821):     [[Node: dropout/dropout/mul = Mul[T=DT_FLOAT, 
_device="/job:localhost/replica:0/task:0/cpu:0"](dropout/dropout/div, 
dropout/dropout/Floor)]]
  • Hi @Dr.SantleCamilus, did you get the solution? Commented Aug 28, 2017 at 6:53
  • 1
    yes, mention of proper input and output node names are essential for the android TF demo to work. Some older TF training code may not include these names to the model. Presence of node names could be found by below answer by JP Kim. If no names are present, it is needed to migrate to new TF training code to include proper node names. Commented Aug 28, 2017 at 9:55
  • I am getting output like this: [u'image_tensor=>Placeholder'] Commented Aug 28, 2017 at 10:26
  • 1
    [u'image_tensor=>Placeholder'] means that your input node name is ''image_tensor" ( / you can use --input_names=image_tensor while defining optimize_for_interface ) Commented Aug 29, 2017 at 8:50
  • 1
    Please check for presence of softmax node in your model using the below answer by JP Kim. If it returns any, please use the same name for output name. Output name is the specific node which generate the output of the CNN network. Commented Aug 30, 2017 at 6:13

3 Answers


Try this:

run python

>>> import tensorflow as tf
>>> gf = tf.GraphDef()
>>> gf.ParseFromString(open('/your/path/to/graphname.pb','rb').read())

and then

>>> [n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax', 'Placeholder')]

Then you'll get a result similar to this:

['Mul=>Placeholder', 'final_result=>Softmax']

But judging from the error messages, I'm not sure the problem is the node names. I guess you provided wrong arguments when loading the graph file, or your generated graph file is somehow wrong.

Check this part:

E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible 
shapes: [1,224,224,3] vs. [32,1,1,2048]

Here [1,224,224,3] is a single 224x224 RGB image, while [32,1,1,2048] looks like a batch of pooled feature maps; that suggests the image is being fed to an internal node (AvgPool) rather than the graph's actual input.

UPDATE: Sorry, if you're using a (re)trained graph, then try this:

[n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax', 'Mul')]

It seems that a (re)trained graph saves its input/output ops as "Mul" and "Softmax", while an optimized and/or quantized graph saves them as "Placeholder" and "Softmax".
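To check both kinds of graph in one pass, you can widen the filter (a small sketch reusing the gf from above; note that a retrained graph may contain many unrelated Mul ops, as the comments below show):

>>> candidate_ops = ('Placeholder', 'Mul', 'Softmax')
>>> [n.name + '=>' + n.op for n in gf.node if n.op in candidate_ops]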

BTW, using a retrained graph in a mobile environment is not recommended according to Pete Warden's post: https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ . It's better to use a quantized or memmapped graph due to performance and file-size issues. I couldn't find out how to load a memmapped graph in Android, though (there's no problem loading an optimized/quantized graph in Android).


Comments

When I execute the command for my custom model, [n.name + '=>' + n.op for n in input_graph_def.node if n.op in ('Softmax', 'Placeholder')], I get [u'tower_0/logits/predictions=>Softmax']. The output layer name is displayed while the input layer name is not present. I can't understand where things go wrong.
@Dr.SantleCamilus, I think the reason you get an error while loading the graph file is that you tried to load a graph not optimized for mobile. You should not use the .pb file from the retraining output; it has the DecodeJpeg issue on mobile. So just convert it using optimize_for_inference and/or quantize_graph. Both are fine, but a quantized graph is better.
The output of [n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax', 'Placeholder')] after the optimize_for_inference, quantize_graph, or transform_graph operation is [u'tower_0/logits/predictions=>Softmax'].
The output of [n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax', 'Mul')] after optimize_for_inference or quantize_graph is [u'tower_0/conv0/BatchNorm/moments/normalize/shifted_mean=>Mul', u'tower_0/conv0/BatchNorm/moments/normalize/Mul=>Mul', ..........., u'tower_0/mixed_8x8x2048b/branch_pool/Conv/BatchNorm/batchnorm/mul=>Mul', u'tower_0/mixed_8x8x2048b/branch_pool/Conv/BatchNorm/batchnorm/mul_1=>Mul', u'tower_0/logits/dropout/dropout/random_uniform/mul=>Mul', u'tower_0/logits/dropout/dropout/mul=>Mul', u'tower_0/logits/predictions=>Softmax']
Here is the history: the TensorFlow models are created using the Inception V3 architecture (github.com/tensorflow/models/tree/master/inception). The models are saved in checkpoint (ckpt) format (.meta, .index and .data). The model is then converted into a .pb file to port it to the TensorFlow Camera Demo (github.com/tensorflow/tensorflow/blob/master/tensorflow/…)

Recently I came across this option directly from TensorFlow:

bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=custom_graph_name.pb
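If you'd rather not build the bazel tool, a rough Python approximation (a minimal sketch, assuming a frozen TF 1.x-style GraphDef; the path is a placeholder) is to load the graph, tally the op types, and list the Placeholder nodes, which are the usual inputs:

import collections
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('custom_graph_name.pb', 'rb') as f:  # placeholder path
    graph_def.ParseFromString(f.read())

# Rough stand-in for summarize_graph's op statistics and input listing.
print(collections.Counter(n.op for n in graph_def.node).most_common(10))
print([n.name for n in graph_def.node if n.op == 'Placeholder'])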



I wrote a simple script to analyze the dependency relations in a computational graph (usually a DAG, a directed acyclic graph). The inputs are obviously the nodes that lack an input. The outputs are harder to pin down, because any node in a graph can be an output: in the weirdest but still valid case, the outputs can be the inputs themselves while all the other nodes are dummies. In the code I still define the output operations as nodes without an output; you can ignore that part if you wish.

import tensorflow as tf

def load_graph(frozen_graph_filename):
    # Read the serialized GraphDef from disk and import it into a new Graph.
    with tf.io.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph

def analyze_inputs_outputs(graph):
    ops = graph.get_operations()
    outputs_set = set(ops)  # start by assuming every op could be an output
    inputs = []
    for op in ops:
        if len(op.inputs) == 0 and op.type != 'Const':
            # Ops with no inputs (other than constants) are the graph's inputs.
            inputs.append(op)
        else:
            # Any op that feeds another op cannot be a final output.
            for input_tensor in op.inputs:
                if input_tensor.op in outputs_set:
                    outputs_set.remove(input_tensor.op)
    outputs = list(outputs_set)
    return (inputs, outputs)
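For example (a minimal usage sketch; the file name is a placeholder, and remember that tf.import_graph_def prefixes every imported name with "import/"):

graph = load_graph('frozen_graph.pb')  # placeholder path
inputs, outputs = analyze_inputs_outputs(graph)
print('inputs:', [op.name for op in inputs])    # e.g. ['import/Mul']
print('outputs:', [op.name for op in outputs])  # e.g. ['import/final_result']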

