[EPIC] Model support dashboard (v2) #1126
I'm thinking: should we have an outline like the one below?
If we keep this in mind, we can map each model to a specific backend. For example, I saw that the Qwen LLM's C++ implementation is described as "working in the same way as llama.cpp", so it may be loadable via llama.cpp and should be compatible with our C++ backend. I also suggest adding labels for each backend series, so we can tell whether a new model is compatible with one of our backends.
The conda branch was merged in #1144. I'm now looking into bringing the llama.cpp backend on par with llama-go, and also adding llava support to it. I'm going to refactor and re-layout things in the new …
@mudler thank you for mentioning this. There are some questions (the second and third ones) where I may need your help: #1180 (comment). What I'm thinking is that we can use a tiny model to test the Rust backend features and make sure everything is OK; then we can merge it. If everything works, we can add other LLMs. I plan to support Llama 2 (60% finished, but it still has an issue), whisper, and also support …
Breaking re-layout PR: #1279
Caching/preloading of transformer and similar models: these are currently automatically loaded on startup into …
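To illustrate the caching/preloading point, here is a minimal sketch of loading models lazily on first use instead of eagerly at startup, with subsequent requests hitting the cache. This is an assumption-laden illustration: `Model`, `ModelCache`, and the load step are hypothetical names, not LocalAI's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// Model is a hypothetical stand-in for a loaded backend model.
type Model struct{ Name string }

// ModelCache lazily loads models and caches them for reuse.
type ModelCache struct {
	mu     sync.Mutex
	models map[string]*Model
}

func NewModelCache() *ModelCache {
	return &ModelCache{models: make(map[string]*Model)}
}

// Get returns the cached model if present; otherwise it loads the
// model (stubbed here) and stores it for later calls.
func (c *ModelCache) Get(name string) *Model {
	c.mu.Lock()
	defer c.mu.Unlock()
	if m, ok := c.models[name]; ok {
		return m
	}
	m := &Model{Name: name} // stand-in for the real backend load
	c.models[name] = m
	return m
}

func main() {
	cache := NewModelCache()
	a := cache.Get("tiny-llama")
	b := cache.Get("tiny-llama")
	fmt.Println(a == b) // second Get returns the same cached instance
}
```

A preloading variant would simply call `Get` for a configured list of model names at startup, so the choice between eager and lazy loading becomes a configuration detail rather than hard-coded behavior.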
I may start looking into #1273 while this progresses. What do you think?
Please feel free to go ahead; there are many pieces involved here, and any help is more than appreciated 👍
In #1746 I'm taking care of automatically binding the HF cache variables to the …
This epic is the main tracker for all the backend additions that should be part of LocalAI v2 and the ongoing efforts.
The objective is to release a v2 that deprecates old models which are now superseded, and adds a new set. To achieve this, my idea is to clean up the current state and start pinning dependencies for all the backends that require specific environment settings (the python-based ones).
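For the python-based backends, pinning could look like a per-backend conda environment file with exact versions. This is only a sketch: the backend name and version numbers below are examples, not the project's actual pins.

```yaml
# Illustrative environment pin for one python-based backend.
# Versions are placeholders, not the real pinned versions.
name: transformers-backend
channels:
  - defaults
dependencies:
  - python=3.11
  - pip
  - pip:
      - torch==2.1.0
      - transformers==4.35.0
```

Keeping one such file per backend isolates conflicting python dependencies from each other and makes builds reproducible.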
Some key points:
Some backends will be deprecated as superseded; accordingly, some repositories will be archived (TBD).
Backends:
After conda
We still need to test on master:
Some rough first steps required:
Re-layout: move extra inside backends (#1264). After the re-layout we can add the new backends listed above without any clashes, and:
deprecate llama in favor of llama-cpp