
Problem with get_operation_by_name('finetune/init_all_tables') #11

Open

germanebr opened this issue Nov 1, 2021 · 1 comment
I've been trying to run the code on Google Colab and on my local computer, both using TensorFlow 1.15.

When I try to plot the sentences before fine-tuning, the following error appears:
`The name 'finetune/init_all_tables' refers to an Operation not in the graph.`

I downloaded the model directly from the hub and loaded it from a new folder:
```python
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

scope = 'finetune'

graph = tf.Graph()

with tf.Session(graph=graph) as sess:
    model_path = 'D:/Users/GermanEBR/Glite/ITESM/DCI/Tesis/Databases/USE/universal-sentence-encoder-4'
    tf.saved_model.loader.load(sess, [tag_constants.SERVING], model_path)

    sess.run(tf.global_variables_initializer())
    sess.run(tf.get_default_graph().get_operation_by_name('finetune/init_all_tables'))

    in_tensor = tf.get_default_graph().get_tensor_by_name(scope + '/module/fed_input_values:0')
    ou_tensor = tf.get_default_graph().get_tensor_by_name(scope + '/module/Encoder_en/hidden_layers/l2_normalize:0')

    # run_and_plot and X are defined earlier in the notebook
    run_and_plot(sess, in_tensor, X, ou_tensor)
```
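
For reference, one way to check which table-initialization ops actually exist in the loaded graph (a diagnostic sketch assuming the same `graph` as above; it is not part of the original notebook):

```python
# List the operations in the loaded graph whose names mention table
# initialization, to see under which name (if any) the init op exists.
init_ops = [op.name for op in graph.get_operations()
            if 'init_all_tables' in op.name or 'table_init' in op.name]
print(init_ops)
```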

I would appreciate it if someone knows what the problem is. Thanks in advance.

helloeve (Owner) commented Nov 2, 2021

Hi @germanebr, as you can see from the commit history, this code was written for a much older version of TensorFlow + TensorFlow Hub. At that time the parameters within the TensorFlow Hub model were not retrainable, so I came up with this workaround. For the latest versions of TensorFlow, I believe you won't need to convert the model at all. You should be able to just load the model with the trainable parameter set to True and then follow the same fine-tuning strategy.
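
A minimal sketch of that suggestion for a recent TF2 + tensorflow_hub setup (the downstream head below is illustrative and not code from this repo):

```python
import tensorflow as tf
import tensorflow_hub as hub

model = tf.keras.Sequential([
    # Load USE v4 from TF Hub with its weights marked trainable,
    # so the encoder is fine-tuned along with the downstream head.
    hub.KerasLayer(
        "https://tfhub.dev/google/universal-sentence-encoder/4",
        input_shape=[],   # batch of raw sentence strings
        dtype=tf.string,
        trainable=True,
    ),
    tf.keras.layers.Dense(2, activation="softmax"),  # illustrative classification head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training this with the usual `model.fit(sentences, labels)` loop then updates the encoder weights directly, with no graph-surgery workaround needed.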
