-
The tests I've seen with the M3 suggested at least an order of magnitude difference in training speed, so the M4 likely won't close much of that gap. A lot will also depend on PyTorch's performance with MPS. So far Apple silicon is great for inference (and light fine-tuning), less so for training, but that's partly because the software stack behind it hasn't fully matured yet. If you're serious about training large models, go with a discrete accelerator (GPU, TPU). If you're casually training small models and the rest is mostly inference (small and large), Apple silicon is great!
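
In case it helps, here's a minimal sketch (assuming a recent PyTorch build with the MPS backend) of how you can detect the best available device and run a rough local timing on it. The matrix sizes and iteration counts are arbitrary and only meant for a quick comparison on your own hardware:

```python
# Minimal sketch, assuming a recent PyTorch build with the MPS backend.
import time
import torch


def pick_device() -> torch.device:
    """Prefer a discrete CUDA GPU, then Apple's MPS backend, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


def _sync(device: torch.device) -> None:
    """Block until queued work on the device has finished."""
    if device.type == "cuda":
        torch.cuda.synchronize()
    elif device.type == "mps":
        torch.mps.synchronize()


def matmul_ms(device: torch.device, size: int = 2048, iters: int = 50) -> float:
    """Return average milliseconds per (size x size) matmul on the device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    _ = a @ b          # warm-up so kernel setup isn't measured
    _sync(device)
    start = time.perf_counter()
    for _ in range(iters):
        _ = a @ b
    _sync(device)      # include queued GPU work in the timing
    return (time.perf_counter() - start) / iters * 1e3


if __name__ == "__main__":
    dev = pick_device()
    print(f"{dev}: {matmul_ms(dev):.2f} ms per matmul")
```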
-
Can anyone say how the performance compares to an RTX 3090 or 4090 when training models in neuralforecast?
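
A hedged sketch of how you could measure this yourself: neuralforecast models are built on PyTorch Lightning and, as far as I know, forward extra keyword arguments such as `accelerator` and `devices` to the Lightning Trainer, so the same script can be timed on an M-series Mac and on a 3090/4090 box. The dataset and hyperparameters below are illustrative only:

```python
# Hedged sketch: time the same small neuralforecast training run on different
# accelerators. Assumes model constructors forward trainer kwargs
# (accelerator, devices) to the underlying PyTorch Lightning Trainer.
import time

from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF


def time_fit(accelerator: str) -> float:
    """Train a small NHITS model and return wall-clock seconds."""
    model = NHITS(
        h=12,                     # forecast horizon
        input_size=24,            # lookback window
        max_steps=200,            # keep the run short for benchmarking
        accelerator=accelerator,  # "cpu", "gpu" (CUDA), or "mps" (Apple silicon)
        devices=1,
    )
    nf = NeuralForecast(models=[model], freq="M")
    start = time.perf_counter()
    nf.fit(df=AirPassengersDF)
    return time.perf_counter() - start


if __name__ == "__main__":
    # Run once per machine/accelerator and compare the printed times.
    print(f"fit took {time_fit('mps'):.1f}s")
```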