
Update How-to-use-different-LLM.md
Corrected grammatical errors to remove ambiguity and improve professionalism.
Ayush-Prabhu authored Oct 18, 2023
1 parent 49a4b11 commit d93266f
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions docs/pages/Guides/How-to-use-different-LLM.md
@@ -1,10 +1,10 @@
- Fortunately, there are many providers for LLM's and some of them can even be run locally
+ Fortunately, there are many providers for LLMs, and some of them can even be run locally.

There are two models used in the app:
1. Embeddings.
2. Text generation.

- By default, we use OpenAI's models but if you want to change it or even run it locally, it's very simple!
+ By default, we use OpenAI's models, but if you want to change it or even run it locally, it's very simple!

### Go to .env file or set environment variables:

@@ -31,6 +31,6 @@ Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choo
That's it!

### Hosting everything locally and privately (for using our optimised open-source models)
- If you are working with important data and don't want anything to leave your premises.
+ If you are working with critical data and don't want anything to leave your premises.

- Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
+ Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable, and for your `LLM_NAME`, you can use anything that is on Hugging Face.
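
For context, a minimal sketch of what the self-hosted settings described in the changed lines above might look like in a `.env` file. The variable names (`SELF_HOSTED_MODEL`, `LLM_NAME`) come from the doc itself; the value formats and the model id are illustrative assumptions, not part of this commit.

```env
# Sketch of a self-hosted configuration (illustrative values)
SELF_HOSTED_MODEL=true
# Any model id hosted on Hugging Face can be used here; this one is only a placeholder
LLM_NAME=meta-llama/Llama-2-7b-chat-hf
```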
