The following model_kwargs are not used by the model: ['tokenizer'] #1237
Describe the issue as clearly as possible:
During the generate process, the tokenizer is being passed to the transformers model.generate() call as a kwarg, which is then rejected by transformers' validation of model_kwargs. Why is the tokenizer being added to this call? Is there a specific version of transformers required for this to work?
It's being added in `_get_generation_kwargs`, at the end of the function:

```python
return dict(
    logits_processor=logits_processor_list,
    generation_config=generation_config,
    tokenizer=self.tokenizer.tokenizer,
)
```
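For what it's worth, a version guard along these lines is what I'd expect to need; this is only a sketch, and the 4.41.0 cutoff is my assumption about when generate() started accepting a tokenizer kwarg (the other names come from the snippet above):

```python
# Hypothetical sketch, not outlines' actual code: drop the tokenizer kwarg
# on transformers versions whose generate() does not accept it. The 4.41.0
# threshold is an assumption, not a verified cutoff.
import transformers
from packaging import version

gen_kwargs = dict(
    logits_processor=logits_processor_list,
    generation_config=generation_config,
    tokenizer=self.tokenizer.tokenizer,
)
if version.parse(transformers.__version__) < version.parse("4.41.0"):
    gen_kwargs.pop("tokenizer")
```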
What am I missing here?
Steps/code to reproduce the bug:
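I haven't isolated a repro outside of outlines, but the underlying transformers behavior can be triggered directly; a minimal sketch, assuming a transformers version whose generate() does not accept a tokenizer kwarg (the model name is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

# On affected versions, generate() rejects the unrecognized `tokenizer`
# kwarg during model_kwargs validation, before generation starts.
model.generate(**inputs, tokenizer=tokenizer)
```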
Expected result:
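Generation runs without error; the kwargs returned by `_get_generation_kwargs` are accepted by model.generate().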
Error message:
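```
ValueError: The following model_kwargs are not used by the model: ['tokenizer']
```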
Outlines/Python version information:
Context for the issue:
No response