From fe8b5d4e7502b75637fcdc3ac5b6ab385f73c9be Mon Sep 17 00:00:00 2001
From: Matt <77928207+mattzcarey@users.noreply.github.com>
Date: Fri, 30 Aug 2024 09:29:50 +0100
Subject: [PATCH] chore: add llama.js to readme

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index bb2b93a35021f..eaddbbd758717 100644
--- a/README.md
+++ b/README.md
@@ -126,6 +126,7 @@ Typically finetunes of the base models below are supported as well.
 - Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
 - Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
+- Bun (experimental): [mattzcarey/llama.js](https://github.com/mattzcarey/llama.js)
 - Node.js: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)
 - JS/TS (llama.cpp server client): [lgrammel/modelfusion](https://modelfusion.dev/integration/model-provider/llamacpp)
 - JavaScript/Wasm (works in browser): [tangledgroup/llama-cpp-wasm](https://github.com/tangledgroup/llama-cpp-wasm)