- Ensure Python 3.11 is installed. Check your Python version by running:

  ```bash
  python3.11 --version
  ```

  If Python 3.11 is not installed, install it first (e.g., via Homebrew on macOS):

  ```bash
  brew install python@3.11
  ```
- Navigate to your project directory and create the virtual environment:

  ```bash
  python3.11 -m venv .env
  ```
- Activate the virtual environment:

  ```bash
  source .env/bin/activate
  ```
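  To confirm the environment is active, check that `python` now resolves inside `.env`:

  ```bash
  which python     # expected: <project>/.env/bin/python
  python --version # expected: Python 3.11.x
  ```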
- Install dependencies, then build and run:

  ```bash
  pip install -r ./backend/requirements.txt
  npm install && npm run build && npm run dev &
  ./backend/start.sh
  open http://localhost:5173
  ```

  This should kick off two servers, from /backend and /backend/pipelines, on ports 8080 and 9099, respectively.
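  As a quick sanity check, you can probe both servers with curl. (Upstream Open WebUI exposes a `/health` route and the pipelines server responds on its root path; verify against your fork if these differ.)

  ```bash
  curl -s http://localhost:8080/health  # Open WebUI backend
  curl -s http://localhost:9099/        # pipelines server
  ```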
- Set up pipelines to access models via API:
  - Log in to Open WebUI and navigate to Admin Panel > Settings > Connections > OpenAI API. Set the API URL to `http://localhost:9099` and the API key to `0p3n-w3bu!`. You should now be connected to the pipelines server.
  - Load some pipelines. Navigate to Admin Panel > Settings > Connections > Pipelines, hit upload, and select a `.py` file from the repo under /backend/open-webui-pipelines/examples/pipelines/providers. The `.env` file contains keys for the `azure_openai...` and `bedrock_claude...` pipelines. Hit the small upload button next to the filename display (the upload does not happen automatically). Go to "Pipelines Valves" and confirm that the variables are correctly detected from your venv environment variables.
  - After completing these steps, the model specified in the pipeline should be available in the drop-down at the upper left when you create a new conversation. A command-line check is sketched below.
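To verify the connection from the command line, you can query the pipelines server directly. This assumes it serves the standard OpenAI-compatible model-listing route, as upstream pipelines does:

```bash
# List the pipelines the server exposes as models;
# 0p3n-w3bu! is the API key configured above
curl -s -H 'Authorization: Bearer 0p3n-w3bu!' http://localhost:9099/v1/models
```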
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.
- 🚀 Effortless Setup: Install seamlessly using Docker or Kubernetes (kubectl, kustomize, or helm) for a hassle-free experience with support for both `:ollama` and `:cuda` tagged images.
- 🤝 Ollama/OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more.
- 🧩 Pipelines, Open WebUI Plugin Support: Seamlessly integrate custom logic and Python libraries into Open WebUI using the Pipelines Plugin Framework. Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. Examples include Function Calling, User Rate Limiting to control access, Usage Monitoring with tools like Langfuse, Live Translation with LibreTranslate for multilingual support, Toxic Message Filtering, and much more.
- 📱 Responsive Design: Enjoy a seamless experience across desktop PCs, laptops, and mobile devices.
- 📱 Progressive Web App (PWA) for Mobile: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.
- ✒️🔢 Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.
- 🎤📹 Hands-Free Voice/Video Call: Experience seamless communication with integrated hands-free voice and video call features, allowing for a more dynamic and interactive chat environment.
- 🛠️ Model Builder: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through Open WebUI Community integration.
- 🐍 Native Python Function Calling Tool: Enhance your LLMs with built-in code editor support in the tools workspace. Bring Your Own Function (BYOF) by simply adding your pure Python functions, enabling seamless integration with LLMs.
- 📚 Local RAG Integration: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the `#` command before a query.
- 🔍 Web Search for RAG: Perform web searches using providers like `SearXNG`, `Google PSE`, `Brave Search`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, and `TavilySearch`, and inject the results directly into your chat experience.
- 🌐 Web Browsing Capability: Seamlessly integrate websites into your chat experience using the `#` command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.
- 🎨 Image Generation Integration: Seamlessly incorporate image generation capabilities using options such as the AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content.
- ⚙️ Many Models Conversations: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.
- 🔐 Role-Based Access Control (RBAC): Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.
- 🌐🌍 Multilingual Support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!
- 🌟 Continuous Updates: We are committed to improving Open WebUI with regular updates, fixes, and new features.
Want to learn more about Open WebUI's features? Check out our Open WebUI documentation for a comprehensive overview!
Don't forget to explore our sibling project, Open WebUI Community, where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀
Note
Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on Open WebUI Documentation is ready to assist you.
Warning
When using Docker to install Open WebUI, make sure to include `-v open-webui:/app/backend/data` in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
Tip
If you wish to use Open WebUI with Ollama included or with CUDA acceleration, we recommend our official images tagged with either `:cuda` or `:ollama`. To enable CUDA, you must install the NVIDIA CUDA container toolkit on your Linux/WSL system.
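On Ubuntu/Debian, a typical installation sequence looks like the following (this assumes NVIDIA's apt repository is already configured; see NVIDIA's documentation for other distributions):

```bash
# Install the toolkit, register it with Docker, and restart the daemon
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```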
- If Ollama is on your computer, use this command:

  ```bash
  docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  ```
- If Ollama is on a different server, change `OLLAMA_BASE_URL` to that server's URL:

  ```bash
  docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  ```
- To run Open WebUI with Nvidia GPU support, use this command:

  ```bash
  docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
  ```
- If you're only using the OpenAI API, use this command:

  ```bash
  docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  ```
This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
- With GPU Support: Utilize GPU resources by running the following command:

  ```bash
  docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
  ```
- For CPU Only: If you're not using a GPU, use this command instead:

  ```bash
  docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
  ```
Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.
After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄
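If the page doesn't load, you can confirm the container is running and inspect its startup logs with standard Docker commands:

```bash
docker ps --filter name=open-webui   # the container should show as "Up"
docker logs -f open-webui            # follow the logs (Ctrl+C to stop)
```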
We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.
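For reference, here is a minimal Docker Compose sketch that mirrors the basic `docker run` command above; it is illustrative only, so prefer the maintained compose file from the documentation:

```yaml
# docker-compose.yaml -- illustrative sketch, not the official file
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui:
```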
Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord.
If you're experiencing connection issues, it's often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the `--network=host` flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link http://localhost:8080.
Example Docker Command:

```bash
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
If you want to update your local Docker installation to the latest version, you can do it with Watchtower:

```bash
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```

In the last part of the command, replace `open-webui` with your container name if it is different.
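If you would rather have updates applied automatically, Watchtower can also run as a long-lived container that polls on an interval; a sketch (the 300-second interval and container name are illustrative):

```bash
docker run -d --name watchtower \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 300 open-webui
```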
Check our Migration Guide available in our Open WebUI Documentation.
Warning
The `:dev` branch contains the latest unstable features and changes. Use it at your own risk, as it may have bugs or incomplete features.
If you want to try out the latest bleeding-edge features and are okay with occasional instability, you can use the `:dev` tag like this:

```bash
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --add-host=host.docker.internal:host-gateway --restart always ghcr.io/open-webui/open-webui:dev
```
Discover upcoming features on our roadmap in the Open WebUI Documentation.
A big shoutout to our amazing supporters who are helping to make this project possible! 🙌
- We're looking for Sponsors!
Special thanks to Prof. Lawrence Kim and Prof. Nick Vincent for their invaluable support and guidance in shaping this project into a research endeavor. Grateful for your mentorship throughout the journey!
This project is licensed under the MIT License - see the LICENSE file for details. 📄
If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🤝
Created by Timothy J. Baek - Let's make Open WebUI even more amazing together!