Commit

docs: placeholder for model downloads folder (#446)
timothycarambat authored Dec 14, 2023
1 parent d094cc3 commit 1e98da0
Showing 3 changed files with 8 additions and 2 deletions.
3 changes: 2 additions & 1 deletion server/storage/models/.gitignore
@@ -1,2 +1,3 @@
 Xenova
-downloaded/*
+downloaded/*
+!downloaded/.placeholder
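The `.gitignore` change above uses git's negation rule: `downloaded/*` ignores everything inside the folder, while `!downloaded/.placeholder` re-includes the single placeholder so the directory stays in the repository. A quick throwaway-repo sketch can confirm this behavior (the paths mirror the commit; the `.gguf` filename is hypothetical):

```shell
# Throwaway repo to check the .gitignore negation from this commit.
set -e
tmp="$(mktemp -d)"
cd "$tmp"
git init -q
mkdir -p server/storage/models/downloaded
# The same three lines the commit leaves in server/storage/models/.gitignore
printf 'Xenova\ndownloaded/*\n!downloaded/.placeholder\n' > server/storage/models/.gitignore
touch server/storage/models/downloaded/.placeholder
touch server/storage/models/downloaded/example.gguf  # hypothetical model file

# Exit code 0 means "ignored"; the placeholder should NOT be ignored.
git check-ignore -q server/storage/models/downloaded/example.gguf && echo "model ignored"
git check-ignore -q server/storage/models/downloaded/.placeholder || echo "placeholder tracked"
```

Re-inclusion works here because only the folder's *contents* are excluded, not the `downloaded` directory itself; git cannot re-include a file whose parent directory is excluded.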
6 changes: 5 additions & 1 deletion server/storage/models/README.md
@@ -30,4 +30,8 @@ If you would like to use a local Llama compatible LLM model for chatting you can
 > If running in Docker you should be running the container to a mounted storage location on the host machine so you
 > can update the storage files directly without having to re-download or re-build your docker container. [See suggested Docker config](../../../README.md#recommended-usage-with-docker-easy)
 
-All local models you want to have available for LLM selection should be placed in the `storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
+> [!NOTE]
+> `/server/storage/models/downloaded` is the default location that your model files should be at.
+> Your storage directory may differ if you changed the STORAGE_DIR environment variable.
+
+All local models you want to have available for LLM selection should be placed in the `server/storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
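Per the note in the README diff, the download folder lives under the storage directory, which the STORAGE_DIR environment variable can relocate. A hedged sketch of staging a model file where the UI will look for it (the temp-dir fallback and the model filename are illustrative, not from the commit):

```shell
# Illustrative only: put a GGUF file in the models/downloaded folder.
# In a real install STORAGE_DIR is ./server/storage (or wherever the
# STORAGE_DIR env variable points); here it falls back to a temp dir.
set -e
STORAGE_DIR="${STORAGE_DIR:-$(mktemp -d)}"
mkdir -p "$STORAGE_DIR/models/downloaded"
# Real use: download or copy an actual model, e.g.
#   cp ~/Downloads/some-model.Q4_K_M.gguf "$STORAGE_DIR/models/downloaded/"
touch "$STORAGE_DIR/models/downloaded/example.Q4_K_M.gguf"  # stand-in file
ls "$STORAGE_DIR/models/downloaded"
```

Only files ending in `.gguf` in that folder become selectable from the UI, so the extension matters more than the rest of the filename.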
1 change: 1 addition & 0 deletions server/storage/models/downloaded/.placeholder
@@ -0,0 +1 @@
+All your .GGUF model file downloads you want to use for chatting should go into this folder.
