🚀 Describe the new functionality needed
We should not need sentence-transformers for calculating embeddings if we are using Ollama; there is no reason to carry a torch dependency. Specifically, we need to update `templates/ollama/run.yaml` to point the embedding model at the ollama inference provider, roughly as sketched below.
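A minimal sketch of what the relevant `models` entry in `templates/ollama/run.yaml` might look like after the change, assuming the template's current layout. The `all-minilm:latest` Ollama tag and the exact keys are assumptions, not a confirmed diff:

```yaml
models:
- metadata:
    embedding_dimension: 384            # all-MiniLM-L6-v2 produces 384-dim vectors
  model_id: all-MiniLM-L6-v2            # keep the alias the client-sdk tests expect
  provider_id: ollama                   # was: sentence-transformers (inline, pulls in torch)
  provider_model_id: all-minilm:latest  # Ollama's tag for the same model (assumption)
  model_type: embedding
```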
💡 Why is this needed? What if we don't build it?
A dependency on the transformers package is a double-edged sword given its complexity (and the torch dependency it pulls in).
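For context, Ollama can already serve embeddings natively over HTTP, so the stack can delegate to it instead of running sentence-transformers in-process. A hedged sketch using Ollama's documented `/api/embeddings` endpoint; the `all-minilm` model tag is an assumption:

```python
import requests

# Ask a locally running Ollama server for an embedding; no torch or
# sentence-transformers needed in this process.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "all-minilm", "prompt": "no torch required for this"},
    timeout=30,
)
resp.raise_for_status()

embedding = resp.json()["embedding"]
print(len(embedding))  # 384 for all-MiniLM-L6-v2
```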
Other thoughts
We need to ensure that the client-sdk tests pass, i.e., that the standard all-MiniLM alias we use everywhere "just works" (a sketch of such a check follows). If there is a more standard Hugging Face ID for that model, maybe we should use it instead.
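A sketch of the kind of check the client-sdk tests would exercise once the alias is served by the ollama provider. Method and parameter names follow the llama-stack-client Python SDK as I understand it; treat them, and the server URL, as assumptions:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The alias used across the client-sdk tests should resolve transparently,
# regardless of which provider backs it.
response = client.inference.embeddings(
    model_id="all-MiniLM-L6-v2",
    contents=["llama-stack embeddings via ollama"],
)

# all-MiniLM-L6-v2 should yield 384-dimensional vectors from any provider.
assert len(response.embeddings[0]) == 384
```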