Inference returning the same empty answer to everything I type #182

Open
Allan1901 opened this issue Oct 23, 2021 · 1 comment

Comments


Allan1901 commented Oct 23, 2021

Anything I type returns the same answer: "- [6.5]". The model was trained only once (prepare_data.py and train.py) on a conversation of 1200 lines. I'm using Python 3.7 and TensorFlow 1.14.0 on a CPU.
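The exact commands were roughly the following (a sketch only; script locations and arguments may differ depending on the nmt-chatbot checkout):

python3 prepare_data.py   # prepare the 1200-line conversation as training data
python3 train.py          # single training run
python3 inference.py      # interactive mode, whose output is shown below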

python3 inference.py
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])


Starting interactive mode (first response will take a while):

> voltei
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From inference.py:57: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/utils/misc_utils.py:97: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/nmt.py:549: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model_helper.py:202: The name tf.container is deprecated. Please use tf.compat.v1.container instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model_helper.py:208: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/utils/iterator_utils.py:87: DatasetV1.make_initializable_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_initializable_iterator(dataset)`.
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:162: The name tf.get_variable_scope is deprecated. Please use tf.compat.v1.get_variable_scope instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model_helper.py:358: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model_helper.py:285: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:375: The name tf.layers.Dense is deprecated. Please use tf.compat.v1.layers.Dense instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model_helper.py:402: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:843: bidirectional_dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/ops/rnn.py:464: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py:738: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:Entity <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8de1435a90>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8de1435a90>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/ops/rnn.py:244: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:Entity <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8dddf16750>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8dddf16750>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:445: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:445: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f8ddded0150>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f8ddded0150>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model_helper.py:508: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:Entity <bound method AttentionWrapper.call of <tensorflow.contrib.seq2seq.python.ops.attention_wrapper.AttentionWrapper object at 0x7f8dddbc0a50>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method AttentionWrapper.call of <tensorflow.contrib.seq2seq.python.ops.attention_wrapper.AttentionWrapper object at 0x7f8dddbc0a50>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:Entity <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f8dddbc0590>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f8dddbc0590>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:Entity <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8dddd103d0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8dddd103d0>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8dddd68e50>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f8dddd68e50>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f8dddc17b90>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f8dddc17b90>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f8dde045a90>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f8dde045a90>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:183: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/attention_model.py:193: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.

WARNING:tensorflow:From /media/lubuntu/TOSHIBA/vit/w/paralel universe3/code/projetos sem pycharm/deb/treinando/nmt-chatbot/nmt/nmt/model.py:100: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

2021-10-23 16:08:00.591094: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2021-10-23 16:08:00.619936: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2095100000 Hz
2021-10-23 16:08:00.620778: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55a151818ba0 executing computations on platform Host. Devices:
2021-10-23 16:08:00.620865: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /media/lubuntu/TOSHIBA/conda2/envs/telegram3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2021-10-23 16:08:00.663323: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Switch: CPU XLA_CPU 
Assign: CPU 
Identity: CPU XLA_CPU 
VariableV2: CPU 
Enter: CPU XLA_CPU 
GatherV2: CPU XLA_CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  embeddings/embedding_share/Initializer/random_uniform/shape (Const) 
  embeddings/embedding_share/Initializer/random_uniform/min (Const) 
  embeddings/embedding_share/Initializer/random_uniform/max (Const) 
  embeddings/embedding_share/Initializer/random_uniform/RandomUniform (RandomUniform) 
  embeddings/embedding_share/Initializer/random_uniform/sub (Sub) 
  embeddings/embedding_share/Initializer/random_uniform/mul (Mul) 
  embeddings/embedding_share/Initializer/random_uniform (Add) 
  embeddings/embedding_share (VariableV2) /device:GPU:0
  embeddings/embedding_share/Assign (Assign) /device:GPU:0
  embeddings/embedding_share/read (Identity) /device:GPU:0
  dynamic_seq2seq/encoder/embedding_lookup/axis (Const) /device:GPU:0
  dynamic_seq2seq/encoder/embedding_lookup (GatherV2) /device:GPU:0
  dynamic_seq2seq/decoder/embedding_lookup/axis (Const) /device:GPU:0
  dynamic_seq2seq/decoder/embedding_lookup (GatherV2) /device:GPU:0
  dynamic_seq2seq/decoder/decoder/while/BasicDecoderStep/cond/embedding_lookup/axis (Const) /device:GPU:0
  dynamic_seq2seq/decoder/decoder/while/BasicDecoderStep/cond/embedding_lookup/Enter (Enter) /device:GPU:0
  dynamic_seq2seq/decoder/decoder/while/BasicDecoderStep/cond/embedding_lookup/Switch (Switch) /device:GPU:0
  dynamic_seq2seq/decoder/decoder/while/BasicDecoderStep/cond/embedding_lookup (GatherV2) /device:GPU:0
  save/Assign_13 (Assign) /device:GPU:0

2021-10-23 16:08:00.663720: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_12 (Assign) /device:GPU:0

2021-10-23 16:08:00.663871: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/encoder/bidirectional_rnn/fw/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_11 (Assign) /device:GPU:0

2021-10-23 16:08:00.664300: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_10 (Assign) /device:GPU:0

2021-10-23 16:08:00.664439: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_9 (Assign) /device:GPU:0

2021-10-23 16:08:00.665402: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_4 (Assign) /device:GPU:0

2021-10-23 16:08:00.665522: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_3 (Assign) /device:GPU:0

2021-10-23 16:08:00.665724: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_6 (Assign) /device:GPU:0

2021-10-23 16:08:00.665858: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_5 (Assign) /device:GPU:0

2021-10-23 16:08:00.666057: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
Const: CPU XLA_CPU 
VariableV2: CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/luong_attention/attention_g/Initializer/ones (Const) 
  dynamic_seq2seq/decoder/attention/luong_attention/attention_g (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/luong_attention/attention_g/Assign (Assign) /device:GPU:0
  save/Assign_2 (Assign) /device:GPU:0

2021-10-23 16:08:00.666289: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Assign (Assign) /device:GPU:0
  save/Assign_1 (Assign) /device:GPU:0

2021-10-23 16:08:00.711406: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
-  [6.5] 

> oi
-  [6.5] 

> olá
-  [6.5] 

> voltei
-  [6.5] 

@aditya543

  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_10 (Assign) /device:GPU:0

2021-10-23 16:08:00.664439: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/encoder/bidirectional_rnn/bw/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_9 (Assign) /device:GPU:0

2021-10-23 16:08:00.665402: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_4 (Assign) /device:GPU:0

2021-10-23 16:08:00.665522: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_0/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_3 (Assign) /device:GPU:0

2021-10-23 16:08:00.665724: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/kernel/Assign (Assign) /device:GPU:0
  save/Assign_6 (Assign) /device:GPU:0

2021-10-23 16:08:00.665858: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
VariableV2: CPU 
Const: CPU XLA_CPU 
Assign: CPU 
Fill: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Initializer/zeros/shape_as_tensor (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Initializer/zeros/Const (Const) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Initializer/zeros (Fill) 
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/multi_rnn_cell/cell_1/basic_lstm_cell/bias/Assign (Assign) /device:GPU:0
  save/Assign_5 (Assign) /device:GPU:0

2021-10-23 16:08:00.666057: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
Const: CPU XLA_CPU 
VariableV2: CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/luong_attention/attention_g/Initializer/ones (Const) 
  dynamic_seq2seq/decoder/attention/luong_attention/attention_g (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/luong_attention/attention_g/Assign (Assign) /device:GPU:0
  save/Assign_2 (Assign) /device:GPU:0

2021-10-23 16:08:00.666289: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
  /job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices: 
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
Assign: CPU 
VariableV2: CPU 
RandomUniform: CPU XLA_CPU 
Const: CPU XLA_CPU 
Mul: CPU XLA_CPU 
Add: CPU XLA_CPU 
Sub: CPU XLA_CPU 

Colocation members, user-requested devices, and framework assigned devices, if any:
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/shape (Const) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/min (Const) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/max (Const) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/RandomUniform (RandomUniform) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/sub (Sub) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform/mul (Mul) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Initializer/random_uniform (Add) 
  dynamic_seq2seq/decoder/attention/attention_layer/kernel (VariableV2) /device:GPU:0
  dynamic_seq2seq/decoder/attention/attention_layer/kernel/Assign (Assign) /device:GPU:0
  save/Assign_1 (Assign) /device:GPU:0

2021-10-23 16:08:00.711406: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
-  [6.5] 

> oi
-  [6.5] 

> olá
-  [6.5] 

> voltei
-  [6.5] 
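
The colocation warnings above are not fatal: the checkpoint was saved from a graph that requested /device:GPU:0, and on a CPU-only machine TensorFlow relocates those ops to the CPU. As a minimal TF 1.x sketch (placeholder paths, not the repo's actual inference code), this fallback is normally enabled by creating the session with soft placement:

```python
# Hedged sketch, not nmt-chatbot's inference.py: in TF 1.x a session created
# with allow_soft_placement=True moves ops that were saved with a
# /device:GPU:0 request onto the CPU, which is what the warnings describe.
import tensorflow as tf

config = tf.ConfigProto(
    allow_soft_placement=True,   # fall back to CPU when GPU kernels are unavailable
    log_device_placement=False,  # set True to print where each op actually runs
)

with tf.Session(config=config) as sess:
    # Placeholder checkpoint paths; use wherever train.py wrote its files.
    saver = tf.train.import_meta_graph("model/translate.ckpt.meta")
    saver.restore(sess, tf.train.latest_checkpoint("model/"))
    # ... run the decode ops as the inference script normally would ...
```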

I know this issue. I have no idea what causes it, but I know how to fix it: just clone branch 2 of nmt-chatbot (the one featured in sentdex's tutorial) and start training the model again from scratch.
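
Before retraining from scratch, it may be worth confirming that inference is actually restoring trained weights; with only ~1,200 lines of training data a seq2seq model can also simply collapse to one degenerate reply. A small, hypothetical diagnostic (the checkpoint directory name is an assumption, not part of nmt-chatbot):

```python
# Hedged diagnostic sketch: list what the checkpoint actually contains
# before deciding to retrain. "model/" is a placeholder for the directory
# where train.py saved its checkpoints.
import tensorflow as tf

ckpt = tf.train.latest_checkpoint("model/")   # placeholder directory
print("latest checkpoint:", ckpt)

reader = tf.train.NewCheckpointReader(ckpt)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)
# If the embedding or LSTM kernels are missing or have unexpected shapes,
# the vocab or checkpoint does not match the graph, which can produce the
# same answer for every input.
```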
