From 496ffcfea91c2652aedeaf1aaef2b902b4f9b77c Mon Sep 17 00:00:00 2001
From: redoomed1 <161974310+redoomed1@users.noreply.github.com>
Date: Fri, 15 Nov 2024 09:15:51 -0800
Subject: [PATCH] Apply suggestion from code review

Co-Authored-By: fria <138676274+friadev@users.noreply.github.com>
Co-Authored-By: Triple T <78900789+I-I-IT@users.noreply.github.com>
Signed-off-by: redoomed1 <161974310+redoomed1@users.noreply.github.com>
---
 docs/ai-chat.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/ai-chat.md b/docs/ai-chat.md
index 0c8f89a97a..636cca96cc 100755
--- a/docs/ai-chat.md
+++ b/docs/ai-chat.md
@@ -23,7 +23,7 @@ Alternatively, you can run AI models locally so that your data never leaves your
 
 ## Hardware for Local AI Models
 
-Local models are also fairly accessible. It's possible to run smaller models at lower speeds on as little as 8GB of RAM. Using more powerful hardware such as a dedicated GPU with sufficient VRAM or a modern system with fast LPDDR5X memory will offer the best experience.
+Local models are also fairly accessible. It's possible to run smaller models at lower speeds on as little as 8GB of RAM. Using more powerful hardware such as a dedicated GPU with sufficient VRAM or a modern system with fast LPDDR5X memory offers the best experience.
 
 LLMs can usually be differentiated by the number of parameters, which can vary between 1.3B to 405B for open-source models available for end users. For example, models below 6.7B parameters are only good for basic tasks like text summaries, while models between 7B and 13B are a great compromise between quality and speed. Models with advanced reasoning capabilities are generally around 70B.