Thank you for trying txtai and writing up the issue!
The ONNX pipeline currently supports text classification, question-answering and mean pooling models. There is still development work needed to support encoder-decoder models such as summarization and translation, as well as more complex models like CLIP.
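For context, the supported paths look roughly like the sketch below. This assumes the HFOnnx pipeline API for a mean pooling export; the sentence-transformers model name is only an example.

```python
from txtai.pipeline import HFOnnx

# Export a mean pooling model to ONNX ("pooling" task), with quantization
onnx = HFOnnx()
model = onnx("sentence-transformers/paraphrase-MiniLM-L6-v2", "pooling", "embeddings.onnx", quantize=True)
```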
When I try to export the Hugging Face CLIP model, I'm running into an error about pixel_values. How do I resolve that?
```python
from txtai.pipeline import HFOnnx

# Attempt to export CLIP to ONNX with quantization
onnx = HFOnnx()
embeddings = onnx("openai/clip-vit-base-patch32", "clip-embeddings.onnx", quantize=True)
```
```
/usr/local/lib/python3.7/dist-packages/transformers/models/clip/modeling_clip.py in forward(self, pixel_values, output_attentions, output_hidden_states, return_dict)
    760
    761         if pixel_values is None:
--> 762             raise ValueError("You have to specify pixel_values")
    763
    764         hidden_states = self.embeddings(pixel_values)

ValueError: You have to specify pixel_values
```
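Until CLIP is supported by the ONNX pipeline, one possible workaround is to export just the CLIP text encoder directly with torch.onnx.export, feeding dummy text inputs so the vision branch (and its pixel_values requirement) is never traced. This is a rough sketch, not a txtai feature; the export options, output names and opset version are assumptions.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

path = "openai/clip-vit-base-patch32"

# torchscript=True makes the model return plain tuples, which traces cleanly
tokenizer = CLIPTokenizer.from_pretrained(path)
model = CLIPTextModel.from_pretrained(path, torchscript=True)
model.eval()

# Dummy text batch used only to trace the graph
dummy = tokenizer(["a photo of a cat"], return_tensors="pt", padding=True)

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "clip-text.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
    },
    opset_version=14,
)
```

The vision branch could in principle be exported the same way from CLIPVisionModel by passing a dummy pixel_values tensor instead of token ids.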