Some error messages #2

Open
flatsiedatsie opened this issue Mar 9, 2024 · 0 comments


While autoguff worked for the first model I tried, it hasn't worked for the ones I tried after.

This is probably due to the models I'm trying to convert more than anything, but just to be sure I thought I'd share:

Traceback (most recent call last):
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1466, in <module>
    main()
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1402, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1278, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 887, in lazy_load_file
    return lazy_load_torch_file(fp, path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 843, in lazy_load_torch_file
    model = unpickler.load()
            ^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 832, in find_class
    return self.CLASSES[(module, name)]
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: ('torch', 'ByteStorage')
Error: failed with return code 1
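For context, the failure mode here can be reproduced with a minimal sketch of an allow-list unpickler like the one in convert.py. The class names and mapping below are illustrative assumptions, not the actual llama.cpp code; the point is that any `(module, name)` pair missing from the allow-list dict raises exactly this kind of `KeyError`:

```python
# Minimal sketch (assumed names, not llama.cpp's real code): a restricted
# unpickler that only resolves classes from an explicit allow-list.
import io
import pickle


class AllowListUnpickler(pickle.Unpickler):
    # Only these (module, name) pairs are permitted; 'torch.ByteStorage'
    # is deliberately absent here, mirroring the traceback above.
    CLASSES = {("torch", "FloatStorage"): object}

    def find_class(self, module, name):
        # A dict lookup with no fallback: unlisted classes raise KeyError.
        return self.CLASSES[(module, name)]


try:
    AllowListUnpickler(io.BytesIO()).find_class("torch", "ByteStorage")
except KeyError as e:
    print("unsupported class:", e)
```

So the checkpoint likely stores tensors in a `torch.ByteStorage` (uint8) container that this converter version simply did not list as supported.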

and

Traceback (most recent call last):
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1466, in <module>
    main()
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1402, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1278, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 890, in lazy_load_file
    return lazy_load_safetensors_file(fp, path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 869, in lazy_load_safetensors_file
    model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'}
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 869, in <dictcomp>
    model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'}
                   ^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 857, in convert
    data_type = SAFETENSORS_DATA_TYPES[info['dtype']]
                ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyError: 'U8'
Error: failed with return code 1
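This second failure has the same shape: the safetensors header declares a dtype code (`U8`, i.e. uint8, common for quantized weights) that the converter's dtype table does not contain. A hedged sketch, with an assumed table rather than convert.py's real one:

```python
# Sketch of a safetensors dtype table missing the 'U8' entry (assumed
# contents for illustration; the real table lives in convert.py).
SAFETENSORS_DATA_TYPES = {
    "F32": "float32",
    "F16": "float16",
    "BF16": "bfloat16",
    # 'U8' (uint8, e.g. quantized weights) is absent, so lookup fails:
}

info = {"dtype": "U8"}
try:
    data_type = SAFETENSORS_DATA_TYPES[info["dtype"]]
except KeyError as e:
    print("unsupported dtype:", e)
```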

and

Traceback (most recent call last):
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1466, in <module>
    main()
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1402, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/elonmusk/code/llama.cpp/convert.py", line 1271, in load_some_model
    raise Exception(f"Found multiple models in {path}, not sure which to pick: {files}")
Exception: Found multiple models in awesome_recipes_exp, not sure which to pick: [PosixPath('awesome_recipes_exp/optimizer.pt'), PosixPath('awesome_recipes_exp/scheduler.pt'), PosixPath('awesome_recipes_exp/pytorch_model.bin')]
Error: failed with return code 1
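The third error is different: the checkpoint directory also contains training state (`optimizer.pt`, `scheduler.pt`) alongside the weights, and the converter refuses to guess. A possible pre-filtering workaround, sketched under the assumption that only the weight file should be handed to the converter (the filenames are the ones from the traceback above; the helper name is hypothetical):

```python
# Hedged workaround sketch: pick the single weight file out of a checkpoint
# directory that also holds optimizer/scheduler state, before converting.
from pathlib import Path

# Training-state files that are never model weights (names per the traceback).
NON_MODEL = {"optimizer.pt", "scheduler.pt"}


def pick_model_file(path: Path) -> Path:
    candidates = list(path.glob("*.pt")) + list(path.glob("*.bin"))
    weights = [p for p in candidates if p.name not in NON_MODEL]
    if len(weights) != 1:
        raise Exception(
            f"Found multiple models in {path}, not sure which to pick: {weights}"
        )
    return weights[0]
```

Running this over a directory laid out like `awesome_recipes_exp` would select `pytorch_model.bin`, which could then be passed to convert.py directly instead of the directory.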