Memory Requirements and iOS Support? #2

Does this run on mobile devices? If so, what are the memory requirements?
It should, haven't tried. Consumes about 4GiB.
Thanks. How would one go about converting the model to fp16?
You can load f32 weights into an f16 model and then write the f16 model's weights back out. The current txt2img/main.swift example already loads f32 weights into an f16 model, so you only need to add an extra write step (a sketch follows), and the new file will contain f16 weights.
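The code snippet in the original comment didn't survive the page scrape. Below is a minimal sketch of the missing write step, assuming the s4nnc `openStore`/`write` store API that txt2img/main.swift already uses for reading, and assuming `graph`, `textModel`, `unet`, and `decoder` are the DynamicGraph and models that file sets up. The output path and the key names ("text_model", "unet", "decoder") are illustrative; they should match whatever keys the corresponding read calls use.

```swift
// After txt2img/main.swift has read the f32 checkpoint into the
// f16 models, write the (now f16) weights out to a new file.
// Path and key names are illustrative, not the repo's actual values.
graph.openStore("/path/to/sd-v1.4-f16.ckpt") { store in
  store.write("text_model", model: textModel)
  store.write("unet", model: unet)
  store.write("decoder", model: decoder)
}
```

On subsequent runs, reading from the f16 file skips the on-load f32-to-f16 conversion.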
Would this decrease memory usage to under 3GB? Thanks again for the help!
No. The model already runs at f16 with f16 weights; the conversion only affects the files stored on disk. FP16 already runs at around 3.3GiB including all the models (text model, UNet, and decoder). The extra 0.7GiB comes from MPS implicit tensors, which can be reduced further by running one op at a time (slower).
Have the memory requirements changed since the app was published on the App Store?