Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
Description:
This is a bug report regarding the "Sharing custom models" feature.
Steps to reproduce:
Following the documentation, I registered my custom-architecture model with AutoClass using the code below and pushed it to the Hugging Face Hub. I confirmed that modeling_bit_llama.py exists in the Hub repository.
from mybitnet import BitLlamaConfig, BitLlamaForCausalLM

# Register the custom config and model so the Auto classes can resolve them
BitLlamaConfig.register_for_auto_class()
BitLlamaForCausalLM.register_for_auto_class("AutoModelForCausalLM")

# trainer is a transformers.Trainer already set up with the custom model
trainer.push_to_hub()
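One way to sanity-check the registration is to confirm that the pushed config.json carries the auto_map entry that register_for_auto_class() writes. A minimal sketch, assuming huggingface_hub is installed; the expected values are inferred from the class names above:

import json
from huggingface_hub import hf_hub_download

# Fetch config.json from the Hub repo and inspect the auto_map entry that
# register_for_auto_class() + push_to_hub() should have written.
config_path = hf_hub_download("HachiML/myBit-Llama2-jp-127M-test-17", "config.json")
with open(config_path) as f:
    auto_map = json.load(f).get("auto_map")
print(auto_map)
# Roughly expected:
# {"AutoConfig": "modeling_bit_llama.BitLlamaConfig",
#  "AutoModelForCausalLM": "modeling_bit_llama.BitLlamaForCausalLM"}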
Then, I tried to load the model using AutoClass with the following code:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "HachiML/myBit-Llama2-jp-127M-test-17"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
print(model)
In version 4.38.x, no error occurred. However, in version 4.39.1, I encountered the following error:
ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.HachiML.myBit-Llama2-jp-127M-test-17.91a53eeaa608293edf70e1734a05e8ebaccd3233.modeling_bit_llama.BitLlamaConfig'> and you passed <class 'transformers_modules.HachiML.myBit-Llama2-jp-127M-test-17.91a53eeaa608293edf70e1734a05e8ebaccd3233.modeling_bit_llama.BitLlamaConfig'>. Fix one of those so they match!
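Note that the two classes in the error message print identically, which suggests the same modeling_bit_llama.py was imported twice and an identity comparison (rather than a name comparison) failed. A self-contained sketch of that effect, using a hypothetical stand-in file rather than the transformers internals:

import importlib.util, os, tempfile

# Write a tiny stand-in for modeling_bit_llama.py so the sketch is self-contained.
path = os.path.join(tempfile.mkdtemp(), "modeling_bit_llama.py")
with open(path, "w") as f:
    f.write("class BitLlamaConfig:\n    pass\n")

def load_fresh(module_name, file_path):
    # Import the file as a brand-new module object, bypassing sys.modules caching.
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

a = load_fresh("modeling_bit_llama", path).BitLlamaConfig
b = load_fresh("modeling_bit_llama", path).BitLlamaConfig
print(repr(a) == repr(b))  # True: the classes print identically
print(a is b)              # False: distinct objects, so an identity check fails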
Expected behavior
The model should be successfully loaded using AutoClass in version 4.39.1, as it did in version 4.38.x.
Actual behavior:
In version 4.39.1, an error is raised when attempting to load the model using AutoClass, even though the same code worked in version 4.38.x.
Please let me know if you need any additional information or clarification. Thank you for your attention to this issue.
@Rocketknight1
Hi, it was mentioned to me that this issue might be related to the loading of remote repos with `.` in their names. Could you please take a look and share any insights or suggestions on how to resolve it? Thank you for your help!
Filed a PR to fix this at #29854. @Hajime-Y, can you test to confirm it fixes your problem? You can install the PR branch with `pip install --upgrade git+https://github.com/huggingface/transformers.git@update_config_class_check`.
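For anyone hitting this before the fix is released, the failure mode above suggests comparing config classes by name rather than by object identity. A purely illustrative sketch of that idea (my guess at the shape of a fix, not the actual diff in #29854):

def config_classes_match(model_config_class, passed_config_class):
    # Compare the class name and the trailing module name instead of using `is`,
    # so two imports of the same modeling file still count as a match.
    return (
        model_config_class.__name__ == passed_config_class.__name__
        and model_config_class.__module__.rsplit(".", 1)[-1]
        == passed_config_class.__module__.rsplit(".", 1)[-1]
    )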
System Info
Environment (output of transformers-cli env):
- transformers version: 4.39.1
- Execution environment: Google Colab
Who can help?
No response