There are similar problems when saving the quantized model; the code is:

```python
if args.save_model:
    model.save_pretrained(args.save_model)
    tokenizer.save_pretrained(args.save_model)
```

Could you please provide the right way to save the quantized model?
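
One variant worth trying, offered as a sketch rather than a confirmed fix: the "non contiguous tensor" save error reported below typically comes from the safetensors serializer, which requires contiguous tensors. `save_pretrained` accepts a `safe_serialization` flag that falls back to PyTorch's pickle format instead. This assumes the same `args`, `model`, and `tokenizer` as in the snippet above.

```python
# Sketch, not a confirmed fix: bypass safetensors (which refuses
# non-contiguous tensors) by saving in the legacy PyTorch pickle format.
if args.save_model:
    model.save_pretrained(args.save_model, safe_serialization=False)
    tokenizer.save_pretrained(args.save_model)
```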
After saving the model with the code above, I get an error like `ValueError: You are trying to save a non contiguous tensor`.
I worked around it by making every parameter contiguous before saving:

```python
if args.save_model:
    # make every parameter contiguous so it can be serialized
    for name, param in model.named_parameters():
        if not param.is_contiguous():
            param.data = param.data.contiguous()
    model.save_pretrained(args.save_model)
    tokenizer.save_pretrained(args.save_model)
```

This saves the model. However, when I load it with `transformers.AutoModelForCausalLM.from_pretrained("saved model")`, another error occurs: `ValueError: Trying to set a tensor of shape torch.Size([1]) in "weight" (which has shape torch.Size([8192])), this looks incorrect.`

Could you please tell me how to save the quantized model like OmniQuant does, and then use the quantized model for evaluation with lm_eval?
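
One possible direction, again as a sketch rather than a confirmed answer: the shape mismatch on reload suggests the saved checkpoint's tensors no longer match what `AutoModelForCausalLM` expects, so the save/reload round trip can be sidestepped by handing the in-memory quantized model straight to lm-evaluation-harness. The sketch below assumes lm-eval v0.4+, and that `model` and `tokenizer` are the quantized objects from the script above; the task name and batch size are placeholders.

```python
# Sketch: evaluate the already-quantized in-memory model with
# lm-evaluation-harness, avoiding the problematic save/reload step.
import lm_eval
from lm_eval.models.huggingface import HFLM

# HFLM accepts a preloaded model/tokenizer as well as a path string.
lm = HFLM(pretrained=model, tokenizer=tokenizer, batch_size=8)

results = lm_eval.simple_evaluate(
    model=lm,
    tasks=["wikitext"],  # placeholder; pick the tasks you need
)
print(results["results"])
```

Passing the live object avoids re-serializing the quantized state dict, which is where both errors above originate.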