Whenever I try to load/run a model by giving an absolute path to the model – which is stored on a btrfs subvolume – llamafile cannot load the model and instead reports that the path is a directory.
This only happens when the btrfs subvolume is part of the given path. For example, this won't work:
llamafile --verbose -m /run/media/yazan/NVME-2TB/@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf
leads to:
/run/media/yazan/NVME-2TB/@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf: failed to load model
It also won't work this way:
cd /run/media/yazan/NVME-2TB
llamafile --verbose -m ./@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf
again:
./@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf: failed to load model
But changing directory into the subvolume and giving the relative path from there works fine:
cd /run/media/yazan/NVME-2TB/@data-models
llamafile --verbose -m ./llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf
now I have:
note: if you have an AMD or NVIDIA GPU then you need to pass -ngl 9999 to enable GPU offloading
llama_model_loader: loaded meta data with 25 key-value pairs and 464 tensors from ./llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
#...
INFO [ server_cli] HTTP server listening | hostname="127.0.0.1" port="8080" tid="139674262733904" timestamp=1734423119 url_prefix=""
software: llamafile 0.8.17
model: gemma-2-9b-Q8.0.gguf
mode: RAW TEXT COMPLETION (base model)
compute: Intel Core i9-9900KF CPU @ 3.60GHz (skylake)
server: http://127.0.0.1:8080/
#...
type text to be completed (or /help for help)
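Until this is fixed, one workaround idea (untested with llamafile itself, and assuming the '@' in the subvolume name is the trigger) is to reach the model through a path that contains no '@', e.g. via a symlink. Sketched here on a scratch directory instead of the real mount:

```shell
# Untested workaround sketch: if '@' in the path is indeed the trigger,
# a symlink gives llamafile an '@'-free path to the same directory.
# Demonstrated on /tmp instead of the real NVME-2TB mount.
mkdir -p '/tmp/@data-models/llms'
ln -sfn '/tmp/@data-models' /tmp/data-models
# llamafile would then be pointed at the '@'-free path:
ls -d /tmp/data-models/llms   # prints /tmp/data-models/llms
```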
Version
llamafile v0.8.17
What operating system are you seeing the problem on?
Linux
Relevant log output
Here are some specs:
openSUSE Tumbleweed 20241211 x86_64
Linux 6.11.8-1-default
bash 5.2.37
And as mentioned, I use btrfs as my filesystem – both for my secondary data storage and for the OS.
llamafile --verbose -m /run/media/yazan/NVME-2TB/@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf
██╗ ██╗ █████╗ ███╗ ███╗ █████╗ ███████╗██╗██╗ ███████╗
██║ ██║ ██╔══██╗████╗ ████║██╔══██╗██╔════╝██║██║ ██╔════╝
██║ ██║ ███████║██╔████╔██║███████║█████╗ ██║██║ █████╗
██║ ██║ ██╔══██║██║╚██╔╝██║██╔══██║██╔══╝ ██║██║ ██╔══╝
███████╗███████╗██║ ██║██║ ╚═╝ ██║██║ ██║██║ ██║███████╗███████╗
╚══════╝╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚══════╝
note: if you have an AMD or NVIDIA GPU then you need to pass -ngl 9999 to enable GPU offloading
/run/media/yazan/NVME-2TB/: warning: failed to read last 64kb of file: Is a directory
llama_model_load: error loading model: failed to open /run/media/yazan/NVME-2TB/@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf: Is a directory
llama_load_model_from_file: failed to load model
/run/media/yazan/NVME-2TB/@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf: failed to load model
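The warning above shows llamafile trying to read /run/media/yazan/NVME-2TB/ rather than the full model path, as if the argument were cut off at the first '@'. A shell sketch of that hypothesis (my speculation, not a confirmed diagnosis):

```shell
# Cutting the model path at the first '@' yields exactly the directory
# named in the warning, which suggests (speculatively) that '@' is being
# treated as a separator somewhere in llamafile's path handling.
path='/run/media/yazan/NVME-2TB/@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf'
echo "${path%%@*}"   # prints /run/media/yazan/NVME-2TB/
```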
change to NVME-2TB:
cd /run/media/yazan/NVME-2TB && llamafile --verbose -m ./@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf
██╗ ██╗ █████╗ ███╗ ███╗ █████╗ ███████╗██╗██╗ ███████╗
██║ ██║ ██╔══██╗████╗ ████║██╔══██╗██╔════╝██║██║ ██╔════╝
██║ ██║ ███████║██╔████╔██║███████║█████╗ ██║██║ █████╗
██║ ██║ ██╔══██║██║╚██╔╝██║██╔══██║██╔══╝ ██║██║ ██╔══╝
███████╗███████╗██║ ██║██║ ╚═╝ ██║██║ ██║██║ ██║███████╗███████╗
╚══════╝╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚══════╝
note: if you have an AMD or NVIDIA GPU then you need to pass -ngl 9999 to enable GPU offloading
./: warning: failed to read last 64kb of file: Is a directory
llama_model_load: error loading model: failed to open ./@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf: Is a directory
llama_load_model_from_file: failed to load model
./@data-models/llms/google/gemma-2-9b/gemma-2-9b-Q8.0.gguf: failed to load model