MacBook Air M1 for AI #1472
Replies: 2 comments
-
I'm not sure what you mean by "state-of-the-art" fine-tuning. If you are asking whether you can fine-tune an LLM: yes, definitely, with a low-rank method and optionally a quantized model. You could, for example, fine-tune small to medium-sized models (maybe up to 7B with QLoRA). It also depends on how big a dataset you need to fine-tune on. If it's pretty small (like a few hundred samples), it should work fine and not take too long. Check out fine-tuning in MLX LM for more info on that.
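To see why ~7B with QLoRA is a reasonable ceiling for this kind of machine, here's a rough back-of-envelope memory estimate. It's a sketch under stated assumptions (4-bit quantized base weights, roughly 1% of parameters trainable as 16-bit LoRA adapters, Adam optimizer), not an exact figure; real usage also includes activations and the KV cache, which grow with sequence length and batch size.

```python
# Rough QLoRA fine-tuning memory estimate (assumptions noted below;
# this ignores activations and KV cache, which add several more GB).

def qlora_memory_gb(n_params_billion, quant_bits=4, lora_frac=0.01):
    """Estimate GB for frozen quantized weights plus LoRA training state.

    Assumptions: base weights quantized to `quant_bits`; about
    `lora_frac` of parameters are trainable adapters kept in fp16,
    each needing weight + gradient + two Adam moments (4 copies).
    """
    base_bytes = n_params_billion * 1e9 * quant_bits / 8   # frozen 4-bit weights
    lora_params = n_params_billion * 1e9 * lora_frac       # ~1% trainable params
    adapter_bytes = lora_params * 2 * 4                    # fp16 x 4 copies
    return (base_bytes + adapter_bytes) / 1e9

print(f"7B model: ~{qlora_memory_gb(7):.1f} GB before activations")
```

For a 7B model this lands around 4 GB of weights and optimizer state, which is why QLoRA on a 7B model is plausible on a 16-36 GB unified-memory Mac while full-precision full fine-tuning (weights, gradients, and optimizer state all in 16/32-bit) is not.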
-
Thanks a lot, yes, I meant those fine-tuning tasks.
-
Is the MacBook Air M1 with 36 GB unified RAM good for state-of-the-art fine-tuning?