
ZLUDA (CUDA Wrapper) for AMD GPUs in Windows

Warning

ZLUDA does not fully support PyTorch in its official build, so ZLUDA support in SD.Next is tricky and unstable, and remains limited at this time. Please don't create issues regarding ZLUDA on GitHub. Feel free to reach out via the ZLUDA thread in the help channel on Discord.

Installing ZLUDA for AMD GPUs in Windows.

Note

This guide assumes you have Git and Python installed, and are comfortable using the command prompt, navigating Windows Explorer, renaming files and folders, and working with zip files.

If you have an integrated AMD GPU (iGPU), you may need to disable it, or use the HIP_VISIBLE_DEVICES environment variable. Learn more here.
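For example, a minimal sketch assuming your discrete GPU is enumerated as device 0 (the index depends on how HIP enumerates your hardware, so you may need 1 instead): open a Command Prompt and set the variable for that session before launching SD.Next from the same window;
set HIP_VISIBLE_DEVICES=0
The setting only lasts for that Command Prompt window; use setx HIP_VISIBLE_DEVICES 0 (or System Properties > Environment Variables) if you want it to persist.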

Install Visual C++ Runtime

Note: Most people will already have this, since it ships with many games, but there's no harm in running the installer to check.

Grab the latest version of Visual C++ Runtime from https://aka.ms/vs/17/release/vc_redist.x64.exe (this is a direct download link) and then run it.
If you get the options to Repair or Uninstall, then you already have it installed and can click Close. Otherwise, install it.
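If you prefer the command line, the redistributable installer also supports a silent install using its standard flags; run this from the folder you downloaded it to;
vc_redist.x64.exe /install /quiet /norestart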

Install ZLUDA

ZLUDA is now auto-installed, and automatically added to PATH, when starting webui.bat with --use-zluda.

Install HIP SDK

Install HIP SDK 5.7.1 from https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html
(Note: ZLUDA currently only works with HIP SDK version 5.7.1. For experimental HIP SDK 6 support, switch to the zluda-rocm6 branch.)
As long as your regular AMD GPU driver is up to date, you don't need to install the PRO driver that the HIP SDK installer suggests.
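To confirm the SDK is installed, open a new Command Prompt and check the HIP_PATH environment variable the installer sets;
echo %HIP_PATH%
This should print the install location, e.g. C:\Program Files\AMD\ROCm\5.7\, which is the same path used in the steps below.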

Replace HIP SDK library files for unsupported GPU architectures.

Go to https://rocm.docs.amd.com/projects/install-on-windows/en/develop/reference/system-requirements.html and find your GPU model.
If your GPU model has a ✅ in both columns then skip to Install SD.Next.
If your GPU model has an ❌ in the HIP SDK column, or if your GPU isn't listed, follow the instructions below;

  1. Open Windows Explorer and copy and paste C:\Program Files\AMD\ROCm\5.7\bin\rocblas into the location bar.
    (This assumes you've installed the HIP SDK in the default location and that Windows is on C:.)
  2. Make a copy of the library folder, for backup purposes (an example backup command is shown after the unzip steps below).
  3. Download one of the following files, and unzip it into the original library folder, overwriting any files there.
    Note: Thanks to FremontDango, these alternate libraries for gfx1031 and gfx1032 GPUs are about 50% faster;

(Note: You may have to install 7-Zip to unzip the .7z files.)

  1. Open the zip file.
  2. Drag and drop the library folder from the zip file into %HIP_PATH%bin\rocblas (the folder you opened in step 1 of the previous list).
  3. Reboot your PC.
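The backup mentioned earlier can also be made from a Command Prompt; a minimal sketch, assuming the default install location (the library.bak name is just a suggestion);
xcopy "%HIP_PATH%bin\rocblas\library" "%HIP_PATH%bin\rocblas\library.bak" /E /I
If anything goes wrong, delete the modified library folder and rename library.bak back to library.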

If your GPU model is not in the HIP SDK column, or is not available in the above list, follow the instructions in the Rocm Support guide (https://github.com/vladmandic/automatic/wiki/Rocm-Support) to build your own RocblasLibs.
(Note: Building your own libraries is not for the faint of heart.)

Install SD.Next

Using Windows Explorer, navigate to the place where you'd like to install SD.Next. This should be a folder your user account has read/write/execute access to; installing SD.Next in a directory that requires admin permissions may prevent it from launching properly. Refrain from installing SD.Next into the Program Files, Users, or Windows folders; this includes the OneDrive folder and the Desktop.

The best place is on an SSD, for faster model loading.

In the Location Bar, type cmd, then hit [Enter]. This will open a Command Prompt window at that location.


Copy and paste the following commands into the Command Prompt window, one at a time;
git clone https://github.com/vladmandic/automatic
then
cd automatic
then
webui.bat --use-zluda --debug --autolaunch

Note: ZLUDA functions best in Diffusers Backend, where certain Diffusers-only options are available.
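If you'd rather not retype the launch command every time, you can save it as a small batch file next to webui.bat; a minimal sketch, using a hypothetical file name such as start-zluda.bat;
@echo off
rem Change to the folder this script lives in, then launch SD.Next with ZLUDA
cd /d %~dp0
call webui.bat --use-zluda --debug --autolaunch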

Compilation, Settings, and First Generation

After the UI starts, head on over to System Tab > Compute Settings
Set "Attention optimization method" to "Dynamic Attention BMM", then click Apply settings.
Now, try to generate something.
The first generation will take a fair while (10-15 minutes, or even longer; some reports state over an hour) because ZLUDA has to compile, but this compilation should only need to be done once.
Note: There will be no progress bar, as the compilation is done by ZLUDA and not SD.Next. Eventually your image will start generating.


Comparison (DirectML)

                 DirectML     ZLUDA
Speed            Slower       Faster
VRAM Usage       More         Less
VRAM GC
Training                      *
Flash Attention
FFT
FFTW
DNN                           🚧
RTC
Source Code      Closed       Open
Python           <=3.12       Same as CUDA

*: Known to be possible, but uses too much VRAM to train Stable Diffusion models/LoRAs/etc.

Compatibility

DTYPE
FP64
FP32
FP16
BF16
LONG
INT8 ✅*
UINT8 ✅*
INT4
FP8
BF8

*: Not tested.


Experimental Settings

The sections below are optional and highly experimental, and aren't required to start generating images. Make sure you can generate images before trying them.

Experimental Speed Increase Using deepcache (optional)

Start SD.Next, head on over to System Tab > Compute Settings.
Scroll down to "Model Compile" and tick the 'Model', 'VAE', and 'Text Encoder' boxes.
Select "deep-cache" as your Model compile backend.
Apply and Shutdown, and restart SD.Next.
