From b679d19ac25b4d5e1b96c2c3e30ef36b736e5fc3 Mon Sep 17 00:00:00 2001 From: Jonah Aragon <jonah@privacyguides.org> Date: Mon, 11 Nov 2024 23:12:24 -0600 Subject: [PATCH] Remove cloud providers --- docs/ai-chat.md | 126 ++++------------------------ docs/tools.md | 15 +--- theme/assets/img/ai-chat/duckai.svg | 2 - theme/assets/img/ai-chat/leo.svg | 2 - 4 files changed, 19 insertions(+), 126 deletions(-) delete mode 100644 theme/assets/img/ai-chat/duckai.svg delete mode 100644 theme/assets/img/ai-chat/leo.svg diff --git a/docs/ai-chat.md b/docs/ai-chat.md index dae42c56f9..69148c8c14 100755 --- a/docs/ai-chat.md +++ b/docs/ai-chat.md @@ -16,117 +16,39 @@ However, to improve the quality of LLMs, developers of AI software often use [Re <details class="admonition info" markdown> <summary>Ethical and Privacy Concerns about LLMs</summary> + AI models have been trained on massive amounts of public *and* private data. If you are concerned about these practices, you can either refuse to use AI or use [truly open-source models](https://proton.me/blog/how-to-build-privacy-first-ai), which publicly release their training datasets and therefore weren't trained on private data. One such model is [OLMoE](https://allenai.org/blog/olmoe) made by the [Allen Institute for AI (Ai2)](https://allenai.org/open-data). [Ethical concerns](https://www.thelancet.com/journals/landig/article/PIIS2588-7500(24)00061-X/fulltext) about AI range from their impact on climate to their potential for discrimination. -</details> - - -## Cloud Providers - -The AI chat cloud providers listed here do not train their models using your chats and do not retain your chats for more than a month, based on each service's privacy policy. However, there is **no guarantee** that these privacy policies are honored. Read our [full list of criteria](#criteria) for more information. - -When using cloud-based AI chat tools, be mindful of the personal information you share. 
Even if a service doesn't store your conversations, there's still a risk of sensitive data being exposed or misused. To protect your privacy and security, **do not share sensitive information** related to health, finance, or other highly personal matters. - -A quick **overview** of the two providers we recommend: - -| Feature | DuckDuckGo AI | Brave Leo | -|---------|---------------|-----------| -| Tor Access | :material-check:{ .pg-green } Official onion service | :material-alert-outline:{ .pg-orange } Android-only (Orbot) | -| Rate Limits | :material-check:{ .pg-green } High | :material-alert-outline:{ .pg-orange } Low-Medium[^1] | -| Self-hosted Models | :material-close:{ .pg-red } | :material-check:{ .pg-green } | -| Web Search Integration | :material-close:{ .pg-red } | :material-check:{ .pg-green } | -| Multi-language Support | :material-check:{ .pg-green } | :material-alert-outline:{ .pg-orange } Limited | -| Account Required | :material-close:{ .pg-red } | :material-close:{ .pg-red } | -| Mobile Support | :material-check:{ .pg-green } | :material-check:{ .pg-green } only on Brave | - -[^1]: Rate limits vary by model, with Llama having the lowest restrictions - -### DuckDuckGo AI Chat - -<div class="admonition recommendation" markdown> - -![DuckDuckGo logo](assets/img/ai-chat/duckai.svg){align=right} - -**DuckDuckGo AI Chat** is a web frontend for AI models. It is made by the popular [search engine provider](search-engines.md) of the same name. -It is available directly on [DuckDuckGo](https://duckduckgo.com), [duck.ai](https://duck.ai), or [DuckDuckGo onion site](https://duckduckgogg41xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion/chat). - -DuckDuckGo give you access to open-weights models from Meta and Mistral, as well as proprietary models from Anthropic and OpenAI. We strongly recommend you use open-weights models, because for those, no chat history is stored by Together.ai, the AI cloud platform DuckDuckGo uses to provide those models. 
-Furthermore, to protect your IP adress and prevent fingerprinting, DuckDuckGo proxies your chats through their servers. - -[:octicons-home-16: Homepage](https://duck.ai){ .md-button .md-button--primary } -[:simple-torbrowser:](https://duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion/chat){ .card-link title="Onion Service" } -[:octicons-eye-16:](https://duckduckgo.com/aichat/privacy-terms){ .card-link title="Privacy Policy" } -[:octicons-info-16:](https://help.duckduckgo.com){ .card-link title="Documentation" } - -</div> - -DuckDuckGo has agreements with their third-party providers that guarantee that they will not use your data for training their AI models. Proprietary model providers can keep a chat history for up to 30 days. For open-weights model, Duck uses the [together.ai](https://together.ai) AI cloud platform, and has disabled history for those chats. - -<div class="admonition danger" markdown> -<p class="admonition-title">Proprietary Model Providers Retain Your Chats</p> -We advise against using proprietary models from Anthropic or OpenAI because those providers keep a chat history for up to 30 days. -</div> -<div class="admonition warning" markdown> -<p class="admonition-title">DuckDuckGo Doesn't Self-Host Open Models</p> -You will have to trust the together.ai cloud platform to honor their commitments to not store chats. -</div> - -### Brave Leo - -<div class="admonition recommendation" markdown> - -![Brave Logo](assets/img/ai-chat/leo.svg){align=right} - -**Brave Leo** is an AI assistant available inside the [Brave browser](https://brave.com), a browser we [recommend](tools/#private-web-browsers). - -Brave Leo supports a variety of models, including open-weights models from Meta and Mistral, and proprietary models from Anthropic. We **strongly recommend** that you use **open-weights models**, because **Brave self-hosts them** and for those open-weights models, they **discards all chat data** after you close your session. 
-Additionally, the ["Bring Your Own Model"](https://brave.com/blog/byom-nightly/) (BYOM) feature allows you to use one of your local AI models directly in Brave. - -[:octicons-home-16: Homepage](https://brave.com/leo){ .md-button .md-button--primary } -[:octicons-eye-16:](https://brave.com/privacy/browser/#brave-leo){ .card-link title="Privacy Policy" } -[:octicons-info-16:](https://github.com/brave/brave-browser/wiki/Brave-Leo){ .card-link title="Documentation" } - -</div> -The default model is Mixtral, which has a low rate limit of 5 messages per hour. However, you can switch to the Llama model, which has "no" rate limits. - -Leo can enhance its knowledge through web searches, similar to Microsoft Copilot. However, Brave's AI solution still faces challenges with multi-language support and contextual understanding. - -<div class="admonition danger" markdown> -<p class="admonition-title">Page Content is Sent by Default</p> -By default, Brave Leo includes the webpage you are currently on as context for the AI model. While this can often be convenient, it also represents a privacy risk for pages with private information, such as your mailbox or social media. However, this feature cannot be globally disabled. Therefore, you'll need to **manually toggle off "Shape answers based on the page's contents"** for pages with PII. -</div> -<div class="admonition danger" markdown> -<p class="admonition-title">Proprietary Model Providers Retain Your Chats</p> -We advise against using Anthropic's Claude proprietary models because Anthropic keeps chat history for up to 30 days. -</div> - -## Local AI Chat +</details> **Running AI models locally** offers a more private and secure alternative to cloud-based solutions, as **your data never leaves your device** and is therefore never shared with third-party providers. This provides peace of mind and **allows you to share sensitive information**. 
For the best experience, a dedicated GPU with sufficient VRAM or a modern system with fast LPDDR5X memory is recommended. Fortunately, it is possible to run smaller models locally even without a high-end computer or dedicated GPU. A computer with at least 8GB of RAM will be sufficient to run smaller models at lower speeds. More precise hardware requirements are listed below: + <details class="admonition info" markdown> <summary>Hardware Requirements for Local Models</summary> + Here are typical requirements for different model sizes: - 7B parameter models: 8GB RAM minimum, 16GB recommended - 13B parameter models: 16GB RAM minimum, 32GB recommended - 70B parameter models: Dedicated GPU with 24GB+ VRAM recommended - Quantized models (4-bit): Can run with roughly half these requirements -</details> +</details> **To run AI locally, you need both an AI client and an AI model**. -### Download AI models +## Downloading AI models There are many permissively licensed **models available to download**. **[Hugging Face](https://huggingface.co/models?library=gguf)** is a platform that lets you browse, research, and download models in common formats like GGUF. Companies that provide good open-weights models include big names like Mistral, Meta, Microsoft, and Google. But there are also many community models and 'fine-tunes' available. For consumer-grade hardware, it is generally recommended to use [quantized models](https://huggingface.co/docs/optimum/en/concept_guides/quantization) for the best balance between model quality and performance. To help you choose a model that fits your needs, you can look at leaderboards and benchmarks. The most widely-used leaderboard is [LM Arena](https://lmarena.ai/), a "Community-driven Evaluation for Best AI chatbots". There is also the [OpenLLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard), which focuses on the performance of open-weights models on common benchmarks like MMLU-PRO. 
However, there are also specialized benchmarks that, for example, measure [emotional intelligence](https://eqbench.com/), ["uncensored general intelligence"](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard), and many [others](https://www.nebuly.com/blog/llm-leaderboards). <details class="admonition warning" markdown> <summary>Model Security and Verification</summary> + When downloading AI models, especially from Hugging Face, it's important to verify their authenticity. Look for: - Model cards with clear documentation @@ -139,10 +61,10 @@ When downloading AI models, especially from Hugging Face, it's important to veri 3. Comparing this hash with the one you get after downloading (using tools like `sha256sum` on Linux/macOS or `certutil -hashfile file SHA256` on Windows) Those steps help ensure you're not downloading potentially malicious models. -</details> +</details> -### AI chat clients +## AI chat clients | Feature | [Kobold.cpp](#koboldcpp) | [Ollama](#ollama) | [Llamafile](#llamafile) | |---------|------------|---------|-----------| @@ -153,7 +75,7 @@ Those steps help ensure you're not downloading potentially malicious models. | Custom Parameters | :material-check:{ .pg-green } | :material-close:{ .pg-red } | :material-alert-outline:{ .pg-orange } | | Multi-platform | :material-check:{ .pg-green } | :material-check:{ .pg-green } | :material-alert-outline:{ .pg-orange } Size limitations on Windows | -#### Kobold.cpp +### Kobold.cpp <div class="admonition recommendation" markdown> @@ -186,7 +108,7 @@ Kobold shines best when you are looking for heavy customization and tweaking, su Kobold.cpp might not run on computers without AVX/AVX2 support. 
</div> -#### Ollama +### Ollama <div class="admonition recommendation" markdown> @@ -202,9 +124,11 @@ In addition to supporting a wide range of text models, Ollama also supports [LLa <details class="downloads" markdown> <summary>Downloads</summary> + - [:fontawesome-brands-windows: Windows](https://ollama.com/download/windows) - [:simple-apple: macOS](https://ollama.com/download/mac) - [:simple-linux: Linux](https://ollama.com/download/linux) + </details> </div> @@ -213,7 +137,7 @@ Ollama shines best when you're looking for an AI client that has great compatibi It also simplifies the process of setting up a local AI chat, as it downloads the AI model you want to use automatically. For example, running `ollama run llama3.2` will automatically download and run the Llama 3.2 model. Furthermore, Ollama maintains its own [model library](https://ollama.com/library/) where it hosts the files of various AI models. This ensures models are vetted for both performance and security, eliminating the need to manually verify model authenticity. -#### Llamafile +### Llamafile <div class="admonition recommendation" markdown> @@ -229,14 +153,18 @@ The Mozilla-run project also supports LLaVA. However, it does not support speech [:octicons-lock-16:](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#security){ .card-link title="Security Policy" } <details class="downloads" markdown> <summary>Downloads</summary> + - [:fontawesome-solid-desktop: Desktop](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#quickstart) + </details> </div> <div class="admonition note" markdown> <p class="admonition-title">Few Models Available</p> + Mozilla has only made llamafiles available for some Llama and Mistral models, while there are few third-party llamafiles available. Another issue is that Windows limits .exe files to 4GB, and most models are larger than that. 
To fix both of those issues, you can [load external weights](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#using-llamafile-with-external-weights). + </div> ## Criteria @@ -245,16 +173,6 @@ Please note we are not affiliated with any of the projects we recommend. In addi ### Minimum Requirements -#### Cloud Providers - -- The provider or third-parties they use must not use your chats for training. -- The provider or third-parties they use must not retain your chats for more than 30 days. -- Must be accessible privately (no account required, accepts requests from VPN users). -- Must provide models they host themselves or with a third-party that acts on their behalf. -- Must provide at least one model with high rate limits, to allow an user to use it for medium to heavy workloads. - -#### Local AI clients - - Must be open-source. - Must not send personal data, including chat data. - Must be available on Linux. @@ -266,14 +184,6 @@ Please note we are not affiliated with any of the projects we recommend. In addi Our best-case criteria represent what we *would* like to see from the perfect project in this category. Our recommendations may not include any or all of this functionality, but those which do may rank higher than others on this page. -#### Cloud Providers - -- Should not retain your chats. -- Should be accessible anonymously trough Tor. -- Should only offer self-hosted open-weights models. -- Should not be rate-limited. - -#### Local AI clients - Should be multi-platform. - Should be easy to download and set up, such as having a one-click install process. - Should have a built-in model downloader option. 
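The hash-verification steps described in the ai-chat.md hunks above (`sha256sum` on Linux/macOS, `certutil -hashfile` on Windows) can be sketched as a short shell snippet. This is a hypothetical example: the filename `model.gguf` is a placeholder, and the expected checksum would normally be copied from the model card (here it is simply the digest of the placeholder file created in the snippet).

```shell
# Hypothetical sketch of the checksum comparison from the verification steps above.
# EXPECTED would be taken from the model card; model.gguf stands in for a real download.
EXPECTED="8f434346648f6b96df89dda901c5176b10a6d83961dd3c1ac88b59b2dc327aa4"
printf 'hi' > model.gguf   # placeholder file, not a real model

# Compute the local digest and keep only the hash field.
ACTUAL="$(sha256sum model.gguf | awk '{print $1}')"

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "Checksum OK: safe to load"
else
  echo "Checksum MISMATCH: do not use this model" >&2
fi
```

On Windows, `certutil -hashfile model.gguf SHA256` prints the same digest; only the comparison plumbing differs.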
diff --git a/docs/tools.md b/docs/tools.md index 673c8c37f5..b7f34a1b38 100644 --- a/docs/tools.md +++ b/docs/tools.md @@ -472,19 +472,6 @@ For encrypting your OS drive, we typically recommend using the encryption tool y ### AI Chat -#### Cloud Providers - -<div class="grid cards" markdown> - -- ![Duck AI logo](assets/img/ai-chat/duckai.svg){ .twemoji loading=lazy }[Duck AI](ai-chat.md#duckduckgo-ai-chat) -- ![Leo AI logo](assets/img/ai-chat/leo.svg){ .twemoji loading=lazy }[Brave Leo](ai-chat.md#brave-leo) - -</div> - -[Learn more :material-arrow-right-drop-circle:](ai-chat.md#cloud-providers) - -#### Local AI - <div class="grid cards" markdown> - ![Kobold logo](assets/img/ai-chat/kobold.png){ .twemoji loading=lazy }[Kobold.cpp](ai-chat.md#koboldcpp) @@ -493,7 +480,7 @@ For encrypting your OS drive, we typically recommend using the encryption tool y </div> -[Learn more :material-arrow-right-drop-circle:](ai-chat.md#local-ai-chat) +[Learn more :material-arrow-right-drop-circle:](ai-chat.md) ### Language Tools diff --git a/theme/assets/img/ai-chat/duckai.svg b/theme/assets/img/ai-chat/duckai.svg deleted file mode 100644 index f747a6e345..0000000000 --- a/theme/assets/img/ai-chat/duckai.svg +++ /dev/null @@ -1,2 +0,0 @@ -<?xml version="1.0" encoding="UTF-8"?> -<svg version="1.1" viewBox="0 0 96 96" xmlns="http://www.w3.org/2000/svg"><path d="M71 51c-11.046 0-20 8.954-20 20a3.06 3.06 0 0 1-2.225 2.953c-7.217 2.028-15.242 3.905-21.141 5.21a3.095 3.095 0 0 1-.932.067H22V75h2.376c.114-.224.261-.442.443-.65l7.28-8.32C25.34 60.904 21 52.941 21 44c0-13.478 9.863-24.732 23-27.4V16h7v.016C66.553 16.526 79 28.86 79 44c0 1.093-.065 2.17-.19 3.23-.269 2.25-2.3 3.77-4.565 3.77H71Z" fill="#876ECB"/><path d="M71 51c-11.248 0-19.63 9.18-19.988 20.04-.014.43-.287.814-.697.947-8.443 2.744-19.908 5.456-27.68 7.177-2.799.62-4.704-2.657-2.816-4.814l5.161-5.898c1.145-1.31.929-3.306-.331-4.505C19.309 58.87 16 51.807 16 44c0-15.464 12.984-28 29-28s29 12.536 29 28c0 2.417-.317 4.763-.914 
7H71Z" fill="#C7B9EE"/><path d="m36 44a5 5 0 1 1-10 0 5 5 0 0 1 10 0zm14 0a5 5 0 1 1-10 0 5 5 0 0 1 10 0zm9 5a5 5 0 1 0 0-10 5 5 0 0 0 0 10z" clip-rule="evenodd" fill="#fff" fill-rule="evenodd"/><path d="M92.501 59c.298 0 .595.12.823.354.454.468.454 1.23 0 1.698l-2.333 2.4a1.145 1.145 0 0 1-1.65 0 1.227 1.227 0 0 1 0-1.698l2.333-2.4c.227-.234.524-.354.822-.354h.005Zm-1.166 10.798h3.499c.641 0 1.166.54 1.166 1.2 0 .66-.525 1.2-1.166 1.2h-3.499c-.641 0-1.166-.54-1.166-1.2 0-.66.525-1.2 1.166-1.2Zm-1.982 8.754c.227-.234.525-.354.822-.354h.006c.297 0 .595.12.822.354l2.332 2.4c.455.467.455 1.23 0 1.697a1.145 1.145 0 0 1-1.65 0l-2.332-2.4a1.227 1.227 0 0 1 0-1.697Z" fill="#CCC"/><rect x="55" y="55" width="32" height="32" rx="16" fill="#DE5833"/><path d="M71 57.044c-7.708 0-13.956 6.248-13.956 13.956 0 7.707 6.248 13.956 13.956 13.956 7.707 0 13.956-6.249 13.956-13.956 0-7.708-6.249-13.956-13.956-13.956ZM58.956 71c0-6.652 5.392-12.044 12.044-12.044 6.651 0 12.044 5.392 12.044 12.044 0 5.892-4.232 10.796-9.822 11.84-1.452-3.336-2.966-7.33-1.485-7.772-1.763-3.18-1.406-5.268 2.254-4.624h.005c.41.047.721.082.818.02.496-.315.189-7.242-4.114-8.182-3.96-4.9-7.73.688-5.817.306 1.529-.382 2.665-.03 2.612-.014-6.755.852-3.614 11.495-1.88 17.369a82.9 82.9 0 0 1 .606 2.116c-4.275-1.85-7.265-6.105-7.265-11.059Z" clip-rule="evenodd" fill="#fff" fill-rule="evenodd"/><path d="M76.29 81.09c-.043.274-.137.457-.306.482-.319.05-1.747-.278-2.56-.587-.092.425-2.268.827-2.613.257-.79.682-2.302 1.673-2.619 1.465-.605-.396-1.175-3.45-.72-4.096.693-.63 2.15.055 3.171.417.347-.586 2.024-.808 2.372-.327.917-.697 2.448-1.68 2.597-1.501.745.897.839 3.03.678 3.89Z" fill="#4CBA3C"/><path d="M68.53 71.87c.311-2.216 4.496-1.523 6.368-1.772a12.11 12.11 0 0 0 3.05-.755c1.547-.636 1.811-.005 1.054.985-2.136 2.533-6.889.69-7.74 2-.248.388-.056 1.301 1.899 1.589 2.64.388 4.81-.468 5.079.05-.603 2.764-10.63 1.823-9.712-2.097h.001Z" clip-rule="evenodd" fill="#FC3" fill-rule="evenodd"/><path d="M73.871 
65.48c-.277-.6-1.7-.596-1.972-.024-.025.118.075.087.263.028.331-.104.938-.295 1.636.078.055.024.109-.033.073-.083Zm-6.954.143c-.264-.019-.693-.05-1.048.147-.52.222-.688.46-.788.624-.037.06-.181.054-.181-.017.035-.954 1.653-1.414 2.241-.821.072.089-.033.081-.224.067Zm6.447 3.199c-1.088-.005-1.088-1.684 0-1.69 1.09.006 1.09 1.685 0 1.69Zm-5.517-.26c-.021 1.294-1.92 1.294-1.94 0 .005-1.289 1.934-1.288 1.94 0Z" fill="#14307E"/></svg> diff --git a/theme/assets/img/ai-chat/leo.svg b/theme/assets/img/ai-chat/leo.svg deleted file mode 100644 index 8e1ef5e770..0000000000 --- a/theme/assets/img/ai-chat/leo.svg +++ /dev/null @@ -1,2 +0,0 @@ -<?xml version="1.0" encoding="UTF-8"?> -<svg fill="none" version="1.1" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path d="M11.352 2.005a2.234 2.234 0 0 0-2.168 1.693l-.49 1.963A4.167 4.167 0 0 1 5.66 8.693l-1.963.491a2.234 2.234 0 0 0 0 4.335l1.963.491a4.167 4.167 0 0 1 3.032 3.032l.491 1.964a2.234 2.234 0 0 0 4.335 0l.491-1.964a4.166 4.166 0 0 1 3.032-3.032l1.964-.49a2.234 2.234 0 0 0 0-4.336l-1.964-.49A4.167 4.167 0 0 1 14.01 5.66l-.49-1.963a2.234 2.234 0 0 0-2.168-1.693Zm-.593 2.086a.61.61 0 0 1 1.185 0l.491 1.964a5.79 5.79 0 0 0 4.213 4.213l1.964.491a.61.61 0 0 1 0 1.185l-1.964.491a5.79 5.79 0 0 0-4.213 4.213l-.49 1.964a.61.61 0 0 1-1.186 0l-.49-1.964a5.79 5.79 0 0 0-4.214-4.213l-1.964-.49a.61.61 0 0 1 0-1.186l1.964-.49a5.79 5.79 0 0 0 4.213-4.214l.491-1.964Zm8.307 11.35a.583.583 0 0 0-1.132 0l-.201.806a2.041 2.041 0 0 1-1.486 1.486l-.805.201a.583.583 0 0 0 0 1.132l.805.201a2.041 2.041 0 0 1 1.486 1.486l.201.805a.583.583 0 0 0 1.132 0l.201-.805a2.041 2.041 0 0 1 1.486-1.486l.805-.201a.583.583 0 0 0 0-1.132l-.805-.201a2.041 2.041 0 0 1-1.486-1.486l-.201-.805Z" clip-rule="evenodd" fill="#62757E" fill-rule="evenodd"/></svg>