Support Mistral's new visual model: Pixtral-12b-240910 #6748
Mistral AI just dropped Pixtral, their 12b model with vision support.

Comments
Gosh I love them. |
Support. |
Hope so too ~ |
+1 |
Kudos to the Ollama team ❤️ |
Related to Pixtral, but more generally regarding multimodal support in Ollama: from my experiments today, Ollama still supports multi-modal chat with LLaVA (retried today with v0.3.10). There were indeed some changes in the llama.cpp server a while back, so I was genuinely interested to understand how Ollama can still handle it while llama.cpp reportedly cannot anymore. Was Ollama relying on the llama.cpp server, or on some other wrapper? Turns out it's relying on neither: Ollama integrates directly with the llama.cpp code base. As for LLaVA, they lifted that support directly from the llama.cpp server codebase and have been maintaining it, in addition to everything else, ever since... The Ollama team are truly unsung heroes in this technological revolution. |
Is it available in Ollama now? |
`ollama pull pixtral`? Not yet. |
Same for me |
Tried to convert and add this with Ollama (https://huggingface.co/DewEfresh/pixtral-12b-8bit/tree/main), but it seems the architecture is not supported by Ollama (yet).

```
K:\AI\DewEfresh\pixtral-12b-8bit>ollama create Pixtral-12B-2409
K:\AI\DewEfresh\pixtral-12b-8bit>ollama create --quantize q8_K_M Pixtral-12B-2409
```
|
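For reference, once the architecture is supported, the usual GGUF import flow looks like the sketch below. `pixtral-12b.gguf` is a placeholder filename, and note that `q8_K_M` is not a quantization type Ollama's `--quantize` flag accepts (supported types include `q4_K_M` and `q8_0`):

```
# Modelfile: point Ollama at a local GGUF file (placeholder name)
FROM ./pixtral-12b.gguf
```

```
# Create from an fp16 GGUF, quantizing to a type Ollama supports,
# then run it once the architecture is recognized.
ollama create pixtral-12b -f Modelfile --quantize q4_K_M
ollama run pixtral-12b
```
|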
I tried to run Pixtral with Python code using an RTX 4060 with 16GB, but it was not possible :(. Perhaps it would work with a 4090 with 24GB. |
Can't wait to try the GGUF version of Pixtral, man |
You need to quantize to run a 12B model on 16GB hardware. |
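In the meantime, outside Ollama, 4-bit quantization is roughly what makes a 12B model fit: ~24 GB of fp16 weights shrink to about 7-8 GB. A sketch with Hugging Face transformers and bitsandbytes; the `mistral-community/pixtral-12b` repo id and the LLaVA-style class are assumptions based on the community conversion, so verify them before relying on this:

```python
# Sketch: load Pixtral with 4-bit NF4 weights so it fits in 16 GB of VRAM.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "mistral-community/pixtral-12b"  # assumption: community transformers port

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4 bits on load
    bnb_4bit_quant_type="nf4",             # NF4 usually loses less quality than fp4
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",  # spill layers to CPU if VRAM still runs short
)
```
|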
any news? |
+1 for this feature |
any news? |
please stop spamming here. multiple people are subscribed to this issue and patiently wait until it is done. (sorry for another mail, subscribers...) |
Anyone else checking the model library a couple times a day waiting for Pixtral, Llama3.2 9B and Molmo-7B to drop? 😄 |
I'm not sure if the team is actively working on multimodal support or if they're focusing on something else at the moment. What is certain is that multimodal capabilities will become increasingly essential in the near future, and many users may switch to alternatives that offer this functionality. |
Not with local models, I guess. |
How much longer? Any hint, pls? |
pretty much |
+1 to this feature :) |
There is no Llama3.2 9B; THERE ARE 1B, 3B, 11B, and 90B. |
yeah. Thanks. You know what we mean. Anyways: waiting patiently. |
Please do your daily checks for "the drop" without spamming. Holy flipping cow. |
Just to point out an alternative for now: LM Studio just released with Pixtral support. |
@pbasov Do I understand it right that this is Apple Silicon only? |
@oderwat I believe so, yes, since it's enabled by the MLX engine and llama.cpp still doesn't support it. But I'm sure Ollama is going to get Pixtral support very soon, seeing that Llama 3.2 vision support is being rolled out in 0.4. |
Guys, I'mma save you some time. I learned that Pixtral will COME by the end of December. Top secret. Now stop wasting your time checking every few days. |
Seems they are switching to their own inference engine as well, or at least for vision models. Honestly, I just wish they'd made some kind of patch set for llama.cpp and maintained it; probably more productive, IMO, if llama.cpp doesn't want to add them currently. |
👍 |
PLEASE! Allow us to compare 2 or more images via ollama API!
I'm pretty sure this is a thing that the LLM itself has to support, correct me if I'm wrong. |
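For what it's worth, Ollama's /api/chat endpoint already accepts a list of base64-encoded images per message, so the API side is covered; whether the model can actually compare them is up to the model itself. A minimal sketch (the model name and image paths are placeholders):

```python
# Sketch: send two images in a single chat message to a local Ollama server.
import base64, json, urllib.request

def b64(path: str) -> str:
    # Read an image file and base64-encode it, as the API expects.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "model": "llava",  # placeholder: any vision-capable model you have pulled
    "stream": False,
    "messages": [{
        "role": "user",
        "content": "What differs between these two images?",
        "images": [b64("a.jpg"), b64("b.jpg")],  # placeholder paths
    }],
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urllib.request.urlopen(req).read())["message"]["content"])
```
|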
Is Pixtral available now? |
oh my... I suggest delaying pixtral support by one week for every useless comment here... (sorry again, subscribers) unsubscribing |