
Exporting CLIP Model from HF Hub #166

Closed
vimal-quilt opened this issue Dec 4, 2021 · 3 comments

Comments

@vimal-quilt

When I try to export the Hugging Face CLIP model, I get an error about missing pixel values. How do I resolve that?

from txtai.pipeline import HFOnnx

# txtai ONNX export pipeline
onnx = HFOnnx()
embeddings = onnx("openai/clip-vit-base-patch32", "clip-embeddings.onnx", quantize=True)

/usr/local/lib/python3.7/dist-packages/transformers/models/clip/modeling_clip.py in forward(self, pixel_values, output_attentions, output_hidden_states, return_dict)
    760
    761         if pixel_values is None:
--> 762             raise ValueError("You have to specify pixel_values")
    763
    764         hidden_states = self.embeddings(pixel_values)

ValueError: You have to specify pixel_values

@davidmezzetti
Member

Thank you for trying txtai and writing up the issue!

The ONNX pipeline currently supports text classification, question-answering and mean pooling models. Development work is still needed to support encoder-decoder models (e.g. summarization, translation) and more complex models like CLIP.
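
For reference, exporting one of the currently supported model types looks roughly like this. This is only a sketch of the HFOnnx pipeline usage; the sentence-transformers model name and output path are illustrative.

from txtai.pipeline import HFOnnx

# Export a supported mean pooling (sentence embeddings) model to ONNX
# Model name and output path are examples only
onnx = HFOnnx()
embeddings = onnx("sentence-transformers/paraphrase-MiniLM-L6-v2", "pooling", "embeddings.onnx", quantize=True)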

@davidmezzetti
Member

If there are further questions, please feel free to reopen or open a new issue.

@nickchomey

FYI, ViT support has been added to the Hugging Face ONNX export, along with seq2seq and other model types.

#371 is the new issue to track adding these mechanisms to txtai.
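
For anyone who wants to try the upstream export directly, a rough sketch using the Hugging Face transformers.onnx export for a ViT model is below. This is the HF export referenced above, not txtai's pipeline; the model id and output path are only examples.

from pathlib import Path

from transformers import AutoFeatureExtractor, AutoModel
from transformers.onnx import FeaturesManager, export

# Example model id; swap in the checkpoint you actually want to export
model_id = "google/vit-base-patch16-224"

# Load the model and its feature extractor (used to build dummy inputs)
model = AutoModel.from_pretrained(model_id)
preprocessor = AutoFeatureExtractor.from_pretrained(model_id)

# Look up the ONNX config registered for this model type
_, onnx_config_cls = FeaturesManager.check_supported_model_or_raise(model, feature="default")
onnx_config = onnx_config_cls(model.config)

# Run the export to an ONNX file
export(preprocessor, model, onnx_config, onnx_config.default_onnx_opset, Path("vit.onnx"))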
