I'm currently running a project on my MacBook M1, but some parts of the TensorFlow framework don't seem fully compatible with its accelerator. Is there any way to force CPU use for now? Training isn't done on my device anyway; I just need it for inference.
Traceback (most recent call last):
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1378, in _do_call
return fn(*args)
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1361, in _run_fn
return self._call_tf_sessionrun(options, feed_dict, fetch_list,
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1454, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'BroadcastTo' OpKernel for 'GPU' devices compatible with node {{function_node sample_sequence_while_body_5649}}{{node model/add}}
(OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tidx=DT_INT32, _XlaHasReferenceVars=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 16005131165644881776, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
device='GPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_UINT64]
device='CPU'; T in [DT_INT64]
device='CPU'; T in [DT_UINT32]
device='CPU'; T in [DT_UINT16]
device='CPU'; T in [DT_INT16]
device='CPU'; T in [DT_UINT8]
device='CPU'; T in [DT_INT8]
device='CPU'; T in [DT_INT32]
device='CPU'; T in [DT_HALF]
device='CPU'; T in [DT_BFLOAT16]
device='CPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_DOUBLE]
device='CPU'; T in [DT_COMPLEX64]
device='CPU'; T in [DT_COMPLEX128]
device='CPU'; T in [DT_BOOL]
device='CPU'; T in [DT_STRING]
device='CPU'; T in [DT_RESOURCE]
device='CPU'; T in [DT_VARIANT]
[[sample_sequence/while/body/_1/model/add]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/andrewattard/Downloads/GPT2/High Achiever/test.py", line 4, in <module>
print(gpt2.generate(sess, truncate='<|endoftext|>'))
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/gpt_2.py", line 487, in generate
out = sess.run(output)
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 968, in run
result = self._run(None, fetches, feed_dict, options_ptr,
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1191, in _run
results = self._do_run(handle, final_targets, final_fetches,
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1371, in _do_run
return self._do_call(_run_fn, feeds, fetches, targets, options,
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1397, in _do_call
raise type(e)(node_def, op, message) # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:
Detected at node 'model/add' defined at (most recent call last):
File "/Users/andrewattard/Downloads/GPT2/High Achiever/test.py", line 3, in <module>
gpt2.load_gpt2(sess, run_name='run1', multi_gpu=False)
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/gpt_2.py", line 404, in load_gpt2
output = model.model(hparams=hparams, X=context, gpus=gpus, reuse=reuse)
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/src/model.py", line 193, in model
h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/src/model.py", line 174, in positions_for
return expand_tile(past_length + tf.range(nsteps), batch_size)
Node: 'model/add'
No registered 'BroadcastTo' OpKernel for 'GPU' devices compatible with node {{node model/add}}
(OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tidx=DT_INT32, _XlaHasReferenceVars=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"
. Registered: device='XLA_CPU_JIT'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 16005131165644881776, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
device='GPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_UINT64]
device='CPU'; T in [DT_INT64]
device='CPU'; T in [DT_UINT32]
device='CPU'; T in [DT_UINT16]
device='CPU'; T in [DT_INT16]
device='CPU'; T in [DT_UINT8]
device='CPU'; T in [DT_INT8]
device='CPU'; T in [DT_INT32]
device='CPU'; T in [DT_HALF]
device='CPU'; T in [DT_BFLOAT16]
device='CPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_DOUBLE]
device='CPU'; T in [DT_COMPLEX64]
device='CPU'; T in [DT_COMPLEX128]
device='CPU'; T in [DT_BOOL]
device='CPU'; T in [DT_STRING]
device='CPU'; T in [DT_RESOURCE]
device='CPU'; T in [DT_VARIANT]
[[sample_sequence/while/body/_1/model/add]]
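The error occurs because the int32 `BroadcastTo` kernel in `model/add` has no GPU implementation on this platform, so hiding the GPU from TensorFlow before any graph is built should route everything to the CPU. A minimal sketch, assuming the device list is restricted before `gpt_2_simple` creates its session:

```python
import tensorflow as tf

# Hide all GPUs (including the Apple Silicon accelerator exposed by the
# tensorflow-metal plugin) so every op is placed on the CPU. This must run
# before any session or graph is created, or TensorFlow raises an error.
tf.config.set_visible_devices([], "GPU")

# No GPU devices should be visible from this point on.
visible_gpus = tf.config.get_visible_devices("GPU")
print(visible_gpus)  # -> []
```

With the GPU hidden, the usual `gpt2.start_tf_sess()` / `gpt2.load_gpt2(...)` / `gpt2.generate(...)` flow should place `model/add` on a CPU kernel, where an int32 `BroadcastTo` is registered. Setting the environment variable `CUDA_VISIBLE_DEVICES=-1` is a common alternative on NVIDIA systems, but it is not guaranteed to affect the Metal plugin, so the `tf.config` call above is the safer option here.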