
Can you force the framework to use CPU? #307

Open
Leli1024 opened this issue Jan 12, 2023 · 0 comments

Comments

@Leli1024

I'm currently running a project on my MacBook M1, but some parts of the TensorFlow framework don't seem fully compatible with my accelerator. Is there any way to force CPU use for now? Training isn't done on my device anyway; I only need it for inference.
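For reference, TensorFlow can be told to ignore the GPU entirely. A minimal sketch, assuming a standard TF 2.x install (the `gpt_2_simple` calls in the traceback below run through the TF1 compat layer, so any device masking has to happen before the session is created):

```python
import os

# Hide every GPU from TensorFlow. This must run BEFORE `import tensorflow`,
# because device discovery happens at initialization. "-1" selects no devices.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# It is now safe to import tensorflow / gpt_2_simple; ops, including the
# failing BroadcastTo in `model/add`, should be placed on the CPU.
```

Note that `CUDA_VISIBLE_DEVICES` is honored by CUDA builds; on Apple Silicon with tensorflow-metal, the runtime-API route may be more reliable: call `tf.config.set_visible_devices([], "GPU")` immediately after importing TensorFlow and before creating the session. Both are untested sketches for this particular setup, not a confirmed fix.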

Traceback (most recent call last):
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1378, in _do_call
    return fn(*args)
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1361, in _run_fn
    return self._call_tf_sessionrun(options, feed_dict, fetch_list,
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1454, in _call_tf_sessionrun
    return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'BroadcastTo' OpKernel for 'GPU' devices compatible with node {{function_node sample_sequence_while_body_5649}}{{node model/add}}
         (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tidx=DT_INT32, _XlaHasReferenceVars=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"
        .  Registered:  device='XLA_CPU_JIT'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 16005131165644881776, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='GPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_UINT64]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_UINT32]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_RESOURCE]
  device='CPU'; T in [DT_VARIANT]

         [[sample_sequence/while/body/_1/model/add]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/andrewattard/Downloads/GPT2/High Achiever/test.py", line 4, in <module>
    print(gpt2.generate(sess, truncate='<|endoftext|>'))
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/gpt_2.py", line 487, in generate
    out = sess.run(output)
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 968, in run
    result = self._run(None, fetches, feed_dict, options_ptr,
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1191, in _run
    results = self._do_run(handle, final_targets, final_fetches,
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1371, in _do_run
    return self._do_call(_run_fn, feeds, fetches, targets, options,
  File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1397, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:

Detected at node 'model/add' defined at (most recent call last):
    File "/Users/andrewattard/Downloads/GPT2/High Achiever/test.py", line 3, in <module>
      gpt2.load_gpt2(sess, run_name='run1', multi_gpu=False)
    File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/gpt_2.py", line 404, in load_gpt2
      output = model.model(hparams=hparams, X=context, gpus=gpus, reuse=reuse)
    File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/src/model.py", line 193, in model
      h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
    File "/Users/andrewattard/miniforge3/envs/thesis/lib/python3.8/site-packages/gpt_2_simple/src/model.py", line 174, in positions_for
      return expand_tile(past_length + tf.range(nsteps), batch_size)
Node: 'model/add'
No registered 'BroadcastTo' OpKernel for 'GPU' devices compatible with node {{node model/add}}
         (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT32, Tidx=DT_INT32, _XlaHasReferenceVars=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"
        .  Registered:  device='XLA_CPU_JIT'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, 16005131165644881776, DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='GPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_UINT64]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_UINT32]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_RESOURCE]
  device='CPU'; T in [DT_VARIANT]

         [[sample_sequence/while/body/_1/model/add]]