Any plans for adding gfx10+ support? #648
I second this feature request, as I have an 8GB 5500 XT for machine learning applications. PyTorch recently made hipBLASLt a hard build requirement from version 2.3+ when ROCm 5.7+ is present. Unless the PyTorch developers make it optional (an issue is currently ongoing), I and other users will be forced to downgrade to 2.2.2, the latest release without this prerequisite.
Here's my attempt to force the compilation of hipBLASLt for my 5500 XT, which uses the gfx1012 architecture. It hit a brick wall when creating the ExtOp libraries. Unless someone in the community who is knowledgeable about how AMD GPUs work at the hardware level provides unofficial patches for the gfx101x/gfx103x arches, I doubt official support will land for the foreseeable future.
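For anyone who wants to reproduce such an attempt: a minimal sketch of forcing an unsupported target through the build, assuming hipBLASLt follows the same `install.sh`/CMake conventions as other ROCm libraries (the `-a`/`--architecture` flag and `AMDGPU_TARGETS` variable are assumptions here, not confirmed against hipBLASLt's current scripts; expect it to fail at the ExtOp stage as described above).

```shell
# Hypothetical build attempt for gfx1012; flag names are assumed from
# rocBLAS-style ROCm install scripts and may differ in hipBLASLt.
git clone https://github.com/ROCm/hipBLASLt
cd hipBLASLt

# Via the install script, overriding the target architecture list:
./install.sh -dc -a gfx1012

# Or directly through CMake, if you drive the build yourself:
# cmake -B build -DAMDGPU_TARGETS=gfx1012 .
```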
@TheTrustedComputer, if I read the code correctly, hipBLASLt and rocWMMA are tied to either the mfma (gfx9) or wmma (gfx11) instruction sets. You can either build hipBLASLt with …
@AngryLoki Do you mean any supported GPU architecture? I built it for mine. I also appreciate your clarification regarding PyTorch's hipBLASLt requirement; PyTorch has an environment variable for that. Gentoo's patch of hipBLASLt as a dummy library is an interesting workaround; I'll probably give that a try. Thanks!
@TheTrustedComputer, build for any supported architecture (e.g. gfx940). PyTorch will attempt to load hipBLASLt, discover that it was not compiled for your current GPU (and it is technically impossible to compile it for that GPU), and automatically fall back to the old hipBLAS code path (the one used in pytorch-2.2.2). There is no need to set …
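To check which ISA name your GPU actually reports, and therefore whether it falls under the mfma/wmma families mentioned above, you can query the ROCm runtime (this requires a working ROCm install; the grep pattern is just a convenience for pulling the `gfx` identifier out of `rocminfo`'s output):

```shell
# Print the gfx ISA name(s) reported by the ROCm runtime.
# A 5500 XT should report gfx1012; RX 6xxx cards report gfx103x.
rocminfo | grep -o 'gfx[0-9a-f]\+' | sort -u
```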
I can no longer use RX 6xxx cards for LLM fine-tuning because of the new hipBLASLt requirement. Are there any plans to add support in the future?