While autoguff worked for the first model I tried, it hasn't worked for the ones I tried after.
This is probably due to the models I'm trying to convert more than anything, but just to be sure I thought I'd share:
Traceback (most recent call last):
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1466, in <module>
main()
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1402, in main
model_plus = load_some_model(args.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1278, in load_some_model
models_plus.append(lazy_load_file(path))
^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 887, in lazy_load_file
return lazy_load_torch_file(fp, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 843, in lazy_load_torch_file
model = unpickler.load()
^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 832, in find_class
return self.CLASSES[(module, name)]
~~~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: ('torch', 'ByteStorage')
Error: failed with return code 1
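In case it helps narrow this down: the `KeyError: ('torch', 'ByteStorage')` comes from convert.py's lazy unpickler, whose `CLASSES` table only whitelists the storage types it knows how to convert. `ByteStorage` is uint8, so this checkpoint presumably contains 8-bit (pre-quantized) weights rather than the fp16/fp32 tensors the converter expects. A minimal sketch to confirm that, assuming a flat state dict (`pytorch_model.bin` is a placeholder path):

```python
import torch

# Minimal sketch: list the tensor dtypes a checkpoint holds, to confirm
# whether it contains uint8 storage. Assumes a flat state dict;
# "pytorch_model.bin" is a placeholder path.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
for name, value in state_dict.items():
    if torch.is_tensor(value):
        print(name, value.dtype)
```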
And from a second model:
Traceback (most recent call last):
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1466, in <module>
main()
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1402, in main
model_plus = load_some_model(args.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1278, in load_some_model
models_plus.append(lazy_load_file(path))
^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 890, in lazy_load_file
return lazy_load_safetensors_file(fp, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 869, in lazy_load_safetensors_file
model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 869, in <dictcomp>
model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'}
^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 857, in convert
data_type = SAFETENSORS_DATA_TYPES[info['dtype']]
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyError: 'U8'
Error: failed with return code 1
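The `KeyError: 'U8'` looks like the same underlying problem: the safetensors header declares a `U8` (uint8) tensor, and convert.py's `SAFETENSORS_DATA_TYPES` table only covers the float/int dtypes it can convert, so this again points at pre-quantized weights. Since a .safetensors file starts with a little-endian u64 header length followed by a JSON header, the dtypes can be checked without extra dependencies (a sketch; `model.safetensors` is a placeholder path):

```python
import json
import struct

# Minimal sketch: read the JSON header of a .safetensors file and list
# the dtypes it declares ("model.safetensors" is a placeholder path).
# The format begins with a little-endian u64 giving the header length,
# followed by that many bytes of JSON.
with open("model.safetensors", "rb") as f:
    (header_len,) = struct.unpack("<Q", f.read(8))
    header = json.loads(f.read(header_len))
print({info["dtype"] for name, info in header.items() if name != "__metadata__"})
```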
And a third:
Traceback (most recent call last):
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1466, in <module>
main()
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1402, in main
model_plus = load_some_model(args.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/elonmusk/code/llama.cpp/convert.py", line 1271, in load_some_model
raise Exception(f"Found multiple models in {path}, not sure which to pick: {files}")
Exception: Found multiple models in awesome_recipes_exp, not sure which to pick: [PosixPath('awesome_recipes_exp/optimizer.pt'), PosixPath('awesome_recipes_exp/scheduler.pt'), PosixPath('awesome_recipes_exp/pytorch_model.bin')]
Error: failed with return code 1
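This last one isn't a dtype problem at all: the directory is a training checkpoint, so `load_some_model` finds `optimizer.pt` and `scheduler.pt` next to `pytorch_model.bin` and refuses to guess which file holds the model. Judging from the traceback, convert.py also accepts a file path directly, so pointing it at the weights file (or moving the optimizer/scheduler state elsewhere) should get past this, assuming the model itself is convertible:

```
python convert.py awesome_recipes_exp/pytorch_model.bin
```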