Releases · logancyang/obsidian-copilot
2.4.9
- Add OpenRouterAI as a separate option in the model dropdown. You can specify the actual model in the settings. OpenRouter serves free and uncensored LLMs! Visit https://openrouter.ai/ to check the available models (see the sketch after this list).
- Bump max tokens to 10,000 and max conversation turns to 30
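
For context, OpenRouter exposes an OpenAI-compatible API, so once your OpenRouter key and model name are in the settings, the request the plugin ultimately makes looks roughly like the sketch below. This is illustrative only, not the plugin's actual code; the model name is a placeholder, so pick a real one from https://openrouter.ai/.

```ts
// Minimal sketch: an OpenAI-style chat completion request sent to OpenRouter.
// The model name is illustrative; check https://openrouter.ai/ for available models.
async function chatViaOpenRouter(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "mistralai/mistral-7b-instruct", // whatever model you set in the Copilot settings
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```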
2.4.8
- Add LM Studio and Ollama as two separate options in the model dropdown (see the sketch after this list)
- Add setup guide
- Remove LocalAI option
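
A rough idea of what the two new options talk to, as a sketch rather than the plugin's actual code: LM Studio runs an OpenAI-compatible local server (commonly on port 1234), while Ollama has its own HTTP API (commonly on port 11434). The ports and model names below are assumptions based on those tools' usual defaults.

```ts
// Sketch of local model access, assuming default ports:
// LM Studio's OpenAI-compatible server on :1234, Ollama on :11434.

async function chatViaLMStudio(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // LM Studio serves whichever model is currently loaded
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function generateViaOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: false }), // model name is a placeholder
  });
  const data = await res.json();
  return data.response;
}
```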
2.4.7
2.4.6
- Add Save and Reload buttons so you no longer have to toggle the plugin off and on every time settings change. Clicking either button triggers a plugin reload so the new settings take effect

- Fix error handling
  - No more "model_not_found" when the user has no access to the model; it now explicitly says you have no access
  - Show the missing API key message when the chat model is not properly initialized
  - Show a model switch failure when Azure credentials are not provided
- Show the actual model name and chain type used in debug messages
- Make `gpt-4-turbo` the default model
2.4.5
2.4.4
- Add the shiny new GPT-4 Turbo model with a 128K context length! (I noticed that this new model is very fast and the older ones, including GPT-3, are becoming slower. Not sure if it's just me. Let me know if this happens to you too!)
2.4.3
2.4.2
2.4.1
- Thanks to @Sokole1's contribution, Local Copilot no longer needs a proxy server and can just use the OpenAI Proxy Base URL setting. Please check the updated setup guide!
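
A minimal sketch of why the separate proxy is no longer needed, using the official `openai` client as a stand-in for the plugin's internals: pointing the base URL at a local OpenAI-compatible server (such as LM Studio) routes the same chat-completions call there instead of api.openai.com. The URL and model name below are placeholders.

```ts
import OpenAI from "openai";

// Sketch only, not the plugin's actual code: a custom base URL sends requests
// straight to the local server, so no proxy process sits in between.
const client = new OpenAI({
  baseURL: "http://localhost:1234/v1", // whatever you put in "OpenAI Proxy Base URL"
  apiKey: "not-needed-for-local-servers",
});

async function ask(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "local-model", // placeholder; the local server decides what it serves
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content ?? "";
}
```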
2.4.0
- Add proxy server for LocalAI
- Implement local model access
- Add LocalAI as an embedding provider (see the sketch after this list)
- Add a step-by-step guide for LocalAI setup for Apple Silicon and Windows WSL
- Create a YouTube demo video for v2.4.0
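
For reference, LocalAI serves an OpenAI-compatible API, so using it as an embedding provider boils down to calling the standard embeddings endpoint on the local server. The sketch below assumes LocalAI's default port 8080 and a placeholder embedding model name; it is not the plugin's actual implementation.

```ts
// Sketch of using LocalAI as an embedding provider, assuming its
// OpenAI-compatible server runs on localhost:8080.
async function embedWithLocalAI(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:8080/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "text-embedding-ada-002", // placeholder; use whatever embedding model LocalAI is configured to serve
      input: text,
    }),
  });
  if (!res.ok) throw new Error(`LocalAI embeddings request failed: ${res.status}`);
  const data = await res.json();
  return data.data[0].embedding;
}
```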