diff --git a/docs/ai-chat.md b/docs/ai-chat.md
index 0c8f89a97a..636cca96cc 100755
--- a/docs/ai-chat.md
+++ b/docs/ai-chat.md
@@ -23,7 +23,7 @@ Alternatively, you can run AI models locally so that your data never leaves your
 
 ## Hardware for Local AI Models
 
-Local models are also fairly accessible. It's possible to run smaller models at lower speeds on as little as 8GB of RAM. Using more powerful hardware such as a dedicated GPU with sufficient VRAM or a modern system with fast LPDDR5X memory will offer the best experience.
+Local models are also fairly accessible. It's possible to run smaller models at lower speeds on as little as 8GB of RAM. Using more powerful hardware such as a dedicated GPU with sufficient VRAM or a modern system with fast LPDDR5X memory offers the best experience.
 
 LLMs can usually be differentiated by the number of parameters, which can vary between 1.3B to 405B for open-source models available for end users. For example, models below 6.7B parameters are only good for basic tasks like text summaries, while models between 7B and 13B are a great compromise between quality and speed. Models with advanced reasoning capabilities are generally around 70B.
 
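
For context on the hunk's hardware claims, here is a minimal back-of-envelope sketch of how parameter count maps to memory requirements, assuming memory use is dominated by the weights (parameters × bytes per parameter). The quantization table and the flat 1.5 GB overhead for activations and KV cache are illustrative assumptions, not figures from the docs:

```python
# Rough estimate of RAM/VRAM needed to load an LLM locally.
# Assumption: memory is dominated by the weights, plus a fixed
# overhead (hypothetical 1.5 GB) for activations and KV cache.

QUANT_BYTES = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}  # bytes per parameter

def approx_memory_gb(params_billions: float, quant: str = "q4",
                     overhead_gb: float = 1.5) -> float:
    """Approximate GB of memory to run a model of the given size."""
    weights_gb = params_billions * 1e9 * QUANT_BYTES[quant] / 1024**3
    return weights_gb + overhead_gb

if __name__ == "__main__":
    for size in (1.3, 7, 13, 70):
        print(f"{size:>5}B params @ 4-bit: ~{approx_memory_gb(size):.1f} GB")
```

Under these assumptions, a 7B model at 4-bit quantization lands around 5 GB, consistent with the hunk's claim that smaller models run on as little as 8GB of RAM, while a 70B model needs roughly 35 GB, which is why the docs point to a dedicated GPU with sufficient VRAM or fast LPDDR5X systems for the larger models.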