Questions about using HuggingFace safetensors models and Colab scripts #6549
Unanswered
jasonsu123 asked this question in Q&A
Hi everyone,
Recently, I’ve been trying to run LLMs locally and came across the very useful tool text-generation-webui.
I have a few questions:
If a HuggingFace author only provides a model in the safetensors format (as shown below), how can I configure text-generation-webui to run such a model locally? Link to the model:
https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1/tree/main
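For context, my current understanding is that it would work roughly like this (a sketch, not verified; the folder naming is an assumption based on how `download-model.py` lays out the `models/` directory):

```shell
# Sketch, assuming a local clone of text-generation-webui.
# download-model.py takes a HuggingFace repo id ("user/model"), not a full URL;
# --branch selects which branch of the repo to fetch (usually "main").
python download-model.py taide/Llama3-TAIDE-LX-8B-Chat-Alpha1 --branch main

# The safetensors files should end up under
# models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1 (the repo id with "/" -> "_");
# the web UI can then be started with that model preselected:
python server.py --model taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
```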
The text-generation-webui tool offers a Google Colab script that is simple and clear. However, I’m not entirely sure about the meaning of certain parameters. For example:
The model URL field appears to require a HuggingFace model URL, but is there a specific form of URL I need to provide? Should it point to the model card page or the "Files & Versions" page?
The branch and command_line parameters: what are their purposes?
Most importantly, after running this Google Colab script, can it still be considered running an LLM locally?
Is there any risk of exposing my input data to the internet?
Here’s the link to the Colab script I’m referring to:
https://colab.research.google.com/github/oobabooga/text-generation-webui/blob/main/Colab-TextGen-GPU.ipynb
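On the URL question above, my guess (an assumption about the notebook's behavior, not its actual code) is that only the repo id "user/model" matters, since both the model card URL and the "Files & Versions" URL reduce to the same id:

```python
# Sketch: normalize a HuggingFace model URL down to its "user/model" repo id.
# Both the model card page and the "Files & Versions" page contain the same id.

def repo_id_from_url(url: str) -> str:
    """Extract 'user/model' from a HuggingFace model URL (or pass a bare id through)."""
    path = url.split("huggingface.co/", 1)[-1]
    # Drop any trailing sub-path such as /tree/main (the Files & Versions view)
    parts = path.strip("/").split("/")
    return "/".join(parts[:2])

print(repo_id_from_url("https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1/tree/main"))
# -> taide/Llama3-TAIDE-LX-8B-Chat-Alpha1
```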
Thank you for your help!