After I set the `LD_LIBRARY_PATH` variable as described in the instructions, and then tried to execute `llama-cli` from `/build/bin`, I got the following error:

```
CANNOT LINK EXECUTABLE "./llama-cli": cannot locate symbol "__emutls_get_address" referenced by "/data/data/com.termux/files/home/llama.cpp/build/ggml/src/libggml.so"...
```
After I unset the variable, I could run the program again, but I'm not sure whether it is using the GPU to accelerate inference at all.

What actually happens when `LD_LIBRARY_PATH` is set to `/vendor/lib64`? And is there any way to use the GPU other than setting this variable?
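One thing worth noting (a general shell technique, not llama.cpp-specific): instead of exporting `LD_LIBRARY_PATH` for the whole session, which makes *every* subsequently launched binary resolve libraries against `/vendor/lib64` and can cause linker errors like the one above, the variable can be set for a single invocation only. A minimal sketch of the scoping, using a plain shell command as a stand-in (the `llama-cli` invocation and model path below are illustrative assumptions):

```shell
# Scope the variable to one command: it is visible inside that command's
# environment only, and the parent shell is left untouched.
LD_LIBRARY_PATH=/vendor/lib64 sh -c 'echo "inside: $LD_LIBRARY_PATH"'

# The parent shell's environment is unchanged afterwards:
echo "outside: ${LD_LIBRARY_PATH:-unset}"

# The same pattern applied to llama-cli (model path is hypothetical):
# LD_LIBRARY_PATH=/vendor/lib64 ./build/bin/llama-cli -m model.gguf -p "Hello"
```

This way, other programs in the same session keep resolving their libraries normally, and only `llama-cli` sees the vendor library directory.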