
Deleting quantization_config broken #35223

Closed
psinger opened this issue Dec 12, 2024 · 6 comments

Comments

@psinger commented Dec 12, 2024

System Info

transformers version 4.47.0

Reproduction

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("h2oai/h2o-danube3-500m-chat", load_in_4bit=True)
del model.config.quantization_config
model.config  # merely displaying the config serializes it to JSON and raises

TypeError: Object of type dtype is not JSON serializable

Expected behavior

I am unable to delete the quantization_config from an existing model. Deleting it leaves the config in a broken state: any attempt to display or serialize it raises the TypeError above.

I also tried setting is_quantized=False, but it does not change anything.

Is there another way of achieving this?

I am aware that there is a .dequantize function, but in this case I'm changing dtypes on my own and specifically want to exclude the quantization_config when saving the model.
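
For what it's worth, the error likely comes from a leftover _pre_quantization_dtype attribute: from_pretrained stores the original torch.dtype there when quantizing, and PretrainedConfig.to_dict appears to drop it only while quantization_config is still present, so after the deletion a raw torch.dtype reaches the JSON encoder. A minimal sketch, assuming that is indeed the cause, is to delete both attributes together:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube3-500m-chat", load_in_4bit=True
)

# Remove the quantization entry together with the torch.dtype bookkeeping
# attribute that otherwise breaks JSON serialization (assumption: this
# leftover attribute is what the encoder chokes on).
for attr in ("quantization_config", "_pre_quantization_dtype"):
    if hasattr(model.config, attr):
        delattr(model.config, attr)

print(model.config)  # should now render without the TypeError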

@psinger psinger added the bug label Dec 12, 2024
@Rocketknight1
Member

I think this might be a question for the forums/Discord, but pinging @SunMarc @MekkCyber just in case

@MekkCyber
Contributor

Hey @psinger, what's the reason you want to delete the quantization_config after the model is loaded?

@psinger
Author

psinger commented Dec 18, 2024

Because I am manually dequantizing and am not relying on the HF functionality for it. Then, before pushing to the Hub, I want to remove the quantization_config from the config.

@MekkCyber
Contributor

Thanks for the clarification @psinger, I think you can make it work as follows:

from transformers import AutoModelForCausalLM, PretrainedConfig

model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube3-500m-chat", load_in_4bit=True
)

# Round-trip the config through a plain dict, dropping the quantization entry.
config_dict = model.config.to_dict()
config_dict.pop("quantization_config", None)

# Rebuild a config object from the cleaned dict.
model.config = PretrainedConfig.from_dict(config_dict)
print(model.config)
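
One caveat: PretrainedConfig.from_dict builds a generic PretrainedConfig, so the architecture-specific config class is not preserved. A variant sketch that keeps the original class, assuming its from_dict accepts the same cleaned dict, would be:

# Same idea, but rebuilt with the model's own config class so that
# type(model.config) stays the architecture-specific class.
config_dict = model.config.to_dict()
config_dict.pop("quantization_config", None)
model.config = type(model.config).from_dict(config_dict)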

@SunMarc
Member

SunMarc commented Dec 23, 2024

Does this fix the issue, @psinger?


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@SunMarc SunMarc closed this as completed Jan 17, 2025