Support exporting models #105

Open
iislucas opened this issue Jun 29, 2018 · 0 comments

@iislucas (Contributor) commented:

Estimator uses the most recent checkpoint by default; see https://www.tensorflow.org/get_started/checkpoints. Note that while checkpoints store only model weights, the whole graph plus weights (a.k.a. a model) can also be saved and restored. This looks like the right abstraction, and may obviate the need for build_parsing_serving_input_receiver_fn, which exports a model that takes a tf.Example proto as input.

Something like the following (thanks to @dborkan for the pointers!):

```python
import tensorflow as tf

feature_spec = {'sentence': tf.FixedLenFeature(dtype=tf.string, shape=1)}
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

# Note: `estimator` below is an instance of the TF Estimator class.
estimator.export_savedmodel(<destination_directory>, serving_input_fn)
```
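For reference, a minimal sketch of consuming the resulting SavedModel, assuming TF 1.x and `tf.contrib.predictor`. The `export_dir` path is hypothetical, and the `'examples'` feed key is an assumption based on the default receiver tensors that build_parsing_serving_input_receiver_fn creates:

```python
import tensorflow as tf

# Hypothetical path: the timestamped directory that export_savedmodel returns.
export_dir = '/tmp/exported_model/1530000000'

# The parsing serving input receiver expects serialized tf.Example protos,
# fed under the default 'examples' key.
predictor = tf.contrib.predictor.from_saved_model(export_dir)
example = tf.train.Example(features=tf.train.Features(feature={
    'sentence': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b'some input sentence']))}))
print(predictor({'examples': [example.SerializeToString()]}))
```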

This seems to fit naturally into the base_model.py abstraction. To be figured out: what's the right way to specify the appropriate checkpoint to use?
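One possible answer, sketched here as an assumption rather than a settled design: in TF 1.x, export_savedmodel accepts an optional checkpoint_path argument (defaulting to the latest checkpoint in model_dir), so base_model.py could expose it to pin a specific checkpoint:

```python
# Pin the export to an explicit checkpoint instead of the latest one.
# The checkpoint filename below is hypothetical; substitute a real one.
estimator.export_savedmodel(
    <destination_directory>,
    serving_input_fn,
    checkpoint_path='model_dir/model.ckpt-10000')
```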

ipavlopoulos pushed a commit to ipavlopoulos/conversationai-models that referenced this issue Mar 2, 2019