I've seen a few cases with sentence-transformers where teams have used slightly bespoke structures (wrappers around a single Transformer at the heart of it all), and this may become even more prevalent with release 3.1.0. Tweaking this line to something along the lines of `fm = [x for x in self.modules() if hasattr(x, "auto_model")][0]` would work in cases where there is a single Transformer. (Also, you may have to turn off the warmup (`--no-model-warmup`), depending on whether or not your model expects a particular form of input.)
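A torch-free sketch of the workaround above: instead of assuming the wrapped Transformer sits at a fixed position in the module list, walk every submodule and take the first one exposing `auto_model`. The classes here are hypothetical stand-ins for `torch.nn.Module` and `sentence_transformers.models.Transformer`, just to show why the scan survives arbitrary nesting:

```python
class Module:
    """Minimal stand-in for torch.nn.Module's recursive modules() iterator."""
    def __init__(self, **children):
        self._children = children

    def modules(self):
        # torch yields the module itself first, then all descendants
        yield self
        for child in self._children.values():
            yield from child.modules()


class TransformerLike(Module):
    """Stand-in for the sentence-transformers Transformer module,
    which holds the underlying HF model as `auto_model`."""
    def __init__(self):
        super().__init__()
        self.auto_model = "<underlying HF AutoModel>"


# A bespoke, nested structure: the Transformer is buried one level down,
# so code that inspects only the first/top-level module misses it.
model = Module(wrapper=Module(transformer=TransformerLike(), pooling=Module()))

# The suggested scan finds it regardless of nesting depth:
fm = [x for x in model.modules() if hasattr(x, "auto_model")][0]
print(fm.auto_model)
```

Note that this only makes sense when exactly one submodule carries `auto_model`; with several Transformers the `[0]` silently picks whichever the traversal reaches first.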
Model description
I have a custom SentenceTransformer model (a custom class, and also quite nested), so at the top level the modules.json file looks like
This loads correctly if I use SentenceTransformers directly, but when loading it in infinity, it complains that the `.auto_model` attribute is missing (thrown by these lines: https://github.com/michaelfeil/infinity/blob/main/libs/infinity_emb/infinity_emb/transformer/embedder/sentence_transformer.py#L81-L93 ). If this sort of custom model can be supported, or if you could give me some guidance on the correct way to save the model, that would be great.
Open source status & huggingface transformers
pip install infinity_emb[all] --upgrade