
Releases: browser-use/web-ui

🚀 Local DeepSeek-r1 Power with Ollama!

28 Jan 12:52
0c9cb9b

Hey everyone,

We've just rolled out a new release packed with awesome updates:

  1. Browser-Use Upgrade: We're now fully compatible with the latest browser-use version 0.1.29! 🎉
  2. Local Ollama Integration: Get ready for completely local and private AI with support for the incredible deepseek-r1 model via Ollama! 🏠

Before You Dive In:

  • Update Code: Don't forget to git pull to grab the latest code changes.
  • Reinstall Dependencies: Run pip install -r requirements.txt to ensure all your dependencies are up to date.

Important Notes on deepseek-r1:

  • Model Size Matters: We've found that deepseek-r1:14b and larger models work exceptionally well! Smaller models may not provide the best experience, so we recommend sticking with the larger options. 🤔

How to Get Started with Ollama and deepseek-r1:

  1. Install Ollama: Head over to the Ollama website and download/install Ollama on your system. 💻
  2. Run deepseek-r1: Open your terminal and run the command: ollama run deepseek-r1:14b (or a larger model if you prefer).
  3. WebUI Setup: Launch the WebUI following the instructions. Here's a crucial step: Uncheck "Use Vision" and set "Max Actions per Step" to 1. ✅ (A rough Python equivalent of these settings is sketched after this list.)
  4. Enjoy! You're now all set to experience the power of local deepseek-r1. Have fun! 🥳
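
For reference, here's a minimal Python sketch of roughly what the WebUI wires up with these settings, using langchain-ollama's ChatOllama and the browser-use Agent. It assumes the Agent in browser-use 0.1.29 accepts `use_vision` and `max_actions_per_step` keyword arguments (parameter names may differ in other versions), and the task string is only a placeholder:

```python
import asyncio

from langchain_ollama import ChatOllama
from browser_use import Agent

# Local deepseek-r1 served by Ollama -- no API key required.
# Assumes `ollama run deepseek-r1:14b` is already running.
llm = ChatOllama(model="deepseek-r1:14b")

async def main():
    agent = Agent(
        task="Find the latest browser-use release on GitHub",  # placeholder task
        llm=llm,
        use_vision=False,        # mirrors unchecking "Use Vision"
        max_actions_per_step=1,  # mirrors "Max Actions per Step" = 1
    )
    await agent.run()

asyncio.run(main())
```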

Happy Chinese New Year! 🏮

✨ DeepSeek-r1 + Browser-use = New Magic ✨

25 Jan 16:13
5bc4978

🚀 Exciting news! Your browser-use can now engage in deep thinking!

Notes:

  1. The current version is a preview of DeepSeek-r1 support and is still under development; please keep pulling the latest code.
  2. The current version only supports the official DeepSeek-r1 API.

How to Use:

  1. 🔑 Configure API Key: Make sure you have set the correct DEEPSEEK_API_KEY in your .env file.

  2. 🌐 Launch WebUI: Launch the WebUI as instructed in the README.

  3. 👀 Disable Vision: In Agent Settings, uncheck "Use_Vision".

  4. 🤖 Select Model: In LLM Provider, select "deepseek", and in Model Name, select "deepseek-reasoner". (A scripted equivalent is sketched after this list.)

  5. 🎉 Enjoy!
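
If you'd rather script it than click through the WebUI, the steps above correspond roughly to the Python sketch below. It assumes langchain-openai is installed, DEEPSEEK_API_KEY is set in your environment (as in step 1), and that the browser-use Agent accepts a `use_vision` flag; exact names may vary across versions, and the task string is only a placeholder:

```python
import asyncio
import os

from langchain_openai import ChatOpenAI
from browser_use import Agent

# DeepSeek's API is OpenAI-compatible: point ChatOpenAI at it and
# select the reasoning model "deepseek-reasoner" (DeepSeek-r1).
llm = ChatOpenAI(
    base_url="https://api.deepseek.com",
    model="deepseek-reasoner",
    api_key=os.environ["DEEPSEEK_API_KEY"],  # from your .env (step 1)
)

async def main():
    agent = Agent(
        task="Summarize the top story on Hacker News",  # placeholder task
        llm=llm,
        use_vision=False,  # step 3: deepseek-reasoner does not take images
    )
    await agent.run()

asyncio.run(main())
```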

Hotfix some errors

16 Jan 01:52
2654e6b
  1. Upgraded to browser-use==0.1.19 to fix the font OS error on Windows.
  2. Fixed the result-parsing error in the stream feature (headless=True) and added support for returning the agent history file.
  3. Fixed the state of the Stop button in the stream feature.

Please pull the latest code and run pip install -r requirements.txt.

New WebUI: Enhanced Features and Compatibility

13 Jan 15:28
be89b90
  1. A brand-new WebUI interface with added features like video display.
  2. Adapted for the latest version of browser-use, with native support for models like Ollama, Gemini, and DeepSeek. Please update your code and run pip install -r requirements.txt.
  3. Ability to stop agent tasks at any time.
  4. Real-time page display in the WebUI when headless=True.
  5. Improved custom browser usage, fixing a bug with using your own browser on Mac.
  6. Support for Docker environment installation.

Original version

06 Jan 14:32
e481813
  1. A Brand New WebUI: We offer a comprehensive web interface that supports a wide range of browser-use functionalities. This UI is designed to be user-friendly and enables easy interaction with the browser agent.

  2. Expanded LLM Support: We've integrated support for various Large Language Models (LLMs), including Gemini, OpenAI, Azure OpenAI, Anthropic, DeepSeek, Ollama, and more. We plan to add support for even more models in the future.

  3. Custom Browser Support: You can use your own browser with our tool, eliminating the need to re-login to sites or deal with other authentication challenges. This feature also supports high-definition screen recording.

  4. Customized Agent: We've implemented a custom agent that enhances browser-use with optimized prompts.