I tried this model and hit the same issue as with --load-in-4bit: a dtype conflict. You can try loading it yourself without any extra arguments; it doesn't work. I think this is something the model maker will need to fix, but if anyone knows a fix I'd be happy to make the changes.
The model maker said "The issue arises during the image conversion process for the visual tokenizer. The preprocess_image function in the modeling_ovis.py script fails to properly convert the images to the required format or type for the visual tokenizer." They then said they got it to work. Maybe they would be willing to share how they fixed it.
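Based on that description, a plausible workaround (purely an assumption on my part, not the model maker's confirmed fix) would be to cast the preprocessed image tensor to the dtype the visual tokenizer's weights actually use before passing it in, e.g. somewhere around `preprocess_image` in `modeling_ovis.py`. A minimal sketch, with a hypothetical `cast_pixel_values` helper:

```python
import torch

def cast_pixel_values(pixel_values: torch.Tensor, model_dtype: torch.dtype) -> torch.Tensor:
    # Hypothetical workaround: align the image tensor's dtype with the visual
    # tokenizer's weight dtype so the matmul inside the tokenizer doesn't hit
    # a float32-vs-bfloat16 (or 4-bit compute dtype) conflict.
    return pixel_values.to(dtype=model_dtype)

# Example: a preprocessed image batch usually comes out of the image processor
# as float32; cast it to the dtype the quantized model computes in.
pixel_values = torch.zeros(1, 3, 448, 448, dtype=torch.float32)
casted = cast_pixel_values(pixel_values, torch.bfloat16)
```

In a quantized bitsandbytes model, the compute dtype (often bfloat16 or float16) is what the inputs need to match, not the packed 4-bit storage dtype.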
It would be great to add this 4-bit quantized version of Ovis 1.6, so it can run on lower-memory hardware: https://huggingface.co/ThetaCursed/Ovis1.6-Gemma2-9B-bnb-4bit