Fixed ROCm permission error and updated ROCm Containerfile #376
```diff
@@ -16,23 +16,28 @@ RUN curl --retry 8 --retry-all-errors -o \
     cat /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Official
 RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Official
 
+# Set amd gpu architecture for RDNA3
+# https://llvm.org/docs/AMDGPUUsage.html#processors
+ENV AMDGPU_TARGETS=gfx1100
+
 RUN dnf install -y rocm-dev hipblas-devel rocblas-devel && \
     dnf clean all && \
     git clone https://github.com/ggerganov/llama.cpp && \
     cd llama.cpp && \
     git reset --hard ${LLAMA_CPP_SHA} && \
     cmake -B build -DCMAKE_INSTALL_PREFIX:PATH=/usr -DGGML_CCACHE=0 \
-      -DGGML_HIPBLAS=1 && \
-    cmake --build build --config Release -j $(nproc) && \
+      -DGGML_HIPBLAS=ON -DAMDGPU_TARGETS=${ROCM_DOCKER_ARCH} && \
+    cmake --build build --config Release -j$(nproc) && \
     cmake --install build && \
     cd / && \
     git clone https://github.com/ggerganov/whisper.cpp.git && \
     cd whisper.cpp && \
     git reset --hard ${WHISPER_CPP_SHA} && \
-    make -j $(nproc) GGML_HIPBLAS=1 && \
-    mv main /usr/bin/whisper-main && \
-    mv server /usr/bin/whisper-server && \
+    cmake -B build -DCMAKE_INSTALL_PREFIX:PATH=/usr -DGGML_CCACHE=0 \
+      -DGGML_HIPBLAS=ON -DAMDGPU_TARGETS=${ROCM_DOCKER_ARCH} && \
+    cmake --build build --config Release -j$(nproc) && \
+    mv build/bin/main /usr/bin/whisper-main && \
+    mv build/bin/server /usr/bin/whisper-server && \
     cd / && \
     rm -rf /var/cache/*dnf* /opt/rocm-*/lib/llvm \
         /opt/rocm-*/lib/rocblas/library/*gfx9* llama.cpp whisper.cpp
```

Review thread on `RUN dnf install -y rocm-dev hipblas-devel rocblas-devel`:

Reviewer: How does this work without the `dnf install`s for the ROCm libs/headers, hmmm?

Author: Apologies, and good catch! I realized this just after making the PR. I made the change manually to stay as close to main as possible, but accidentally deleted those lines.

Review thread on `mv build/bin/main /usr/bin/whisper-main`:

Reviewer: Might be simpler to do a `cmake --install build` and then the two `mv`s; haven't tried. The reason the `mv`s are there is that these two binaries are badly named. The llama.cpp equivalents also used to be named `main` and `server`, so you couldn't have both installed in the same directory, because they had the same names.

Author: That makes sense. I tried it with `cmake --install build` and then the `mv`s, but it didn't work; it could be that the second `cmake --install` still overrides it.

Reviewer: Maybe something like this:

Author: The install doesn't cover the examples.
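To spell out the failure mode from that thread: whisper.cpp's install target, at this commit, apparently only installs the libraries, not the example binaries, so an install-then-rename sequence has nothing to rename. A sketch of the two approaches (paths assumed, untested):

```sh
# Tried: install, then rename -- fails because `cmake --install build`
# doesn't ship the example binaries (main/server) to /usr/bin at all.
cmake --install build
mv /usr/bin/main /usr/bin/whisper-main      # nothing there to move

# What the PR does instead: take the examples straight out of build/bin.
mv build/bin/main   /usr/bin/whisper-main
mv build/bin/server /usr/bin/whisper-server
```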
Review thread on the cleanup `rm -rf` line:

Reviewer: I wonder, should we add it like the llama.cpp upstream containers do, if it doesn't cause issues:
Author: Good idea. I checked the llama.cpp repo docs, and that's how the official ROCm Dockerfile does it.
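For reference, a rough sketch of the pattern the upstream llama.cpp ROCm image uses; the exact architecture list and variable defaults upstream may differ, so treat this as an illustration rather than a copy of their file:

```dockerfile
# Sketch: upstream exposes the GPU architecture list as a build argument and
# forwards it to CMake, so one image recipe can cover several gfx targets.
ARG ROCM_DOCKER_ARCH="gfx1030 gfx1100 gfx1101 gfx1102"

RUN cmake -B build -DGGML_HIPBLAS=ON \
        -DAMDGPU_TARGETS="${ROCM_DOCKER_ARCH}" && \
    cmake --build build --config Release -j"$(nproc)"
```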
Reviewer: If you're curious why we leave out the gfx9* ones: the container images were getting humongous, and we were hitting limits. It would also take an age for users to download huge images, so we had to trim. I wouldn't be against creating a rocm-gfx9 image in future to solve that problem, though, if people complained that their older AMD GPUs don't work.
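A hypothetical rocm-gfx9 variant would mostly invert two decisions in the Containerfile above: build for gfx9-class targets, and keep the gfx9 rocBLAS kernels in the cleanup step. Untested sketch:

```dockerfile
# Hypothetical: target Vega-era (gfx9) GPUs instead of RDNA3.
ENV AMDGPU_TARGETS=gfx900

# ...same build steps as above, but the cleanup must NOT delete
# /opt/rocm-*/lib/rocblas/library/*gfx9*, since those are the kernels
# this image would exist to provide.
RUN rm -rf /var/cache/*dnf* /opt/rocm-*/lib/llvm llama.cpp whisper.cpp
```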
Author: That makes sense. I hooked up an old Radeon 5450 (gfx1036) to my rig and edited the llama.cpp arguments to use it, but it ran out of memory; it only had 512 MB. I have a Vega 64 lying around; I can give that a shot if it fits into my PC!

In general, @ericcurtin, sorry for the PR mess. Once I confirm compatibility, should I make a new PR?
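Incidentally, one quick way to confirm which gfx target a card reports before editing build arguments, assuming the ROCm stack is installed on the host:

```sh
# Lists the unique gfx ISA names ROCm sees, e.g. "gfx1100" or "gfx900".
rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u
```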