
Exporting for TensorFlow Serving


This is a tutorial on exporting a trained model for use with TensorFlow Serving. If you are looking to export an optimized graph definition in a protocol buffer format (e.g. to use on phones), check out this tutorial.

Install TensorFlow Serving

TensorFlow Serving works well on Ubuntu 14.04+. There is currently no support for Mac OS X unless you want to use Docker. This is an open issue that you can follow here.

You can follow the TensorFlow Serving installation instructions here. I have also put together some notes for Ubuntu 14.04 here.

Export your model

Export your trained model with the following script:

CUDA_VISIBLE_DEVICES=1 python export.py \
--checkpoint_path $EXPERIMENT_DIR/logdir \
--export_dir $EXPERIMENT_DIR/export \
--export_version 1 \
--config $EXPERIMENT_DIR/config_test.yaml \
--serving \
--do_preprocess
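Under the hood, an export like this writes the graph and variables in a format the model server can load from the version subdirectories of the export directory. The following is a minimal sketch of such an export using the TF 1.x SavedModel API, not the actual contents of export.py; the sess, input_tensor, and logits variables and the images/scores tensor names are placeholders you would supply from your own graph and checkpoint code.

import os
import tensorflow as tf

# Sketch only: write a servable model under <export_dir>/<export_version>/.
# `sess`, `input_tensor`, and `logits` are assumed to come from your own
# graph-building and checkpoint-restoring code.
export_version = 1
export_path = os.path.join(export_dir, str(export_version))
builder = tf.saved_model.builder.SavedModelBuilder(export_path)

# Declare which tensors the server exposes as inputs and outputs.
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'images': input_tensor},
    outputs={'scores': logits})

builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
builder.save()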

Compile and run the model

From the TensorFlow Serving repo:

$ bazel build //tensorflow_serving/model_servers:tensorflow_model_server
$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000  --model_base_path=$EXPERIMENT_DIR/export/ --model_name=inception

Compile the client and classify an image

From the TensorFlow Serving repo:

$ bazel build //tensorflow_serving/example:inception_client

This will create a Python file at bazel-bin/tensorflow_serving/example/inception_client. For me, this file hard-codes the PYTHON_BINARY path to the system install of Python. If you are using a virtualenv, you should change this to point at that virtualenv's Python binary.
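Concretely, the generated stub contains an assignment along the lines of the one below (the exact original value depends on your Bazel configuration; the virtualenv path shown is just an example you would replace with your own):

# Inside bazel-bin/tensorflow_serving/example/inception_client, change the
# hard-coded interpreter path to your virtualenv's Python binary.
PYTHON_BINARY = '/path/to/your/virtualenv/bin/python'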

Now we can classify an image:

$ bazel-bin/tensorflow_serving/example/inception_client --image /path/to/image.jpg
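If you want to issue the same request from your own code rather than through inception_client, the gRPC call it makes looks roughly like the sketch below. The host, port, and model name mirror the server command above; the 'images' input key is an assumption and must match whatever key your exported model's signature actually uses.

from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

# Connect to the running model server (same host/port as the command above).
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Read the raw image bytes and build a Predict request.
with open('/path/to/image.jpg', 'rb') as f:
    image_data = f.read()

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'  # must match --model_name above
request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(image_data, shape=[1]))

# 10 second timeout; the response holds the output tensors keyed by name.
result = stub.Predict(request, 10.0)
print(result)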