Run with model running on colab #7
briancunning started this conversation in General
Replies: 1 comment
- Thanks for sharing. In my opinion, the slow part is the LLM, and I don't know whether Colab will help with that. You would need an Ollama container hosted in the cloud or the Hugging Face Inference API for the model. If you can get the model working with Colab, please share the know-how.
- I'm trying to get this to work with a model running on Colab, as running models locally on my laptop is incredibly slow.
https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/llama-2-7b-chat.ipynb
The above Colab notebook would need to be modified to create a local tunnel that exposes the model's URL. Something along the lines of the sketch below.
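A minimal sketch of what such a tunnelling cell might look like, assuming the notebook launches text-generation-webui with its API listening on port 5000 and using pyngrok for the tunnel (the port, the pyngrok choice, and the ngrok auth token are all assumptions, not details taken from that notebook):

```python
# Rough sketch (not from the original notebook): expose the text-generation-webui
# API running inside Colab through an ngrok tunnel so a remote client can reach it.
# Assumes the web UI was started with its API enabled on port 5000; adjust to match
# the launch cell in your copy of the notebook.

# Install the tunnelling helper inside the Colab runtime first:
# !pip install pyngrok

from pyngrok import ngrok

NGROK_AUTH_TOKEN = "YOUR_NGROK_TOKEN"  # placeholder: paste your own token here
API_PORT = 5000                        # assumed port for the text-generation-webui API

ngrok.set_auth_token(NGROK_AUTH_TOKEN)

# Open a public HTTP tunnel to the local API port and print the URL
# that a client outside Colab (e.g. your laptop) can point at.
tunnel = ngrok.connect(API_PORT, "http")
print("Model API exposed at:", tunnel.public_url)
```

If you'd rather not create an ngrok account, an npm-based tunnel (e.g. `!npx localtunnel --port 5000`, assuming Node is available in the runtime) should work the same way.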