
Releases: logancyang/obsidian-copilot

2.4.9

12 Jan 16:50
195c7de
  • Add OpenRouterAI as a separate option in the model dropdown. You can specify the actual model in the settings (a minimal sketch follows below). OpenRouter serves free and uncensored LLMs! Visit their site to check the available models: https://openrouter.ai/
  • Bumped max tokens to 10,000 and max conversation turns to 30
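
OpenRouter exposes an OpenAI-compatible API, so the integration mostly comes down to swapping the base URL and model ID. A minimal sketch, not the plugin's actual code; the API key source and model ID are placeholders:

```ts
import OpenAI from "openai";

// Sketch only: point an OpenAI-compatible client at OpenRouter's endpoint.
const client = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY,   // placeholder key source
  baseURL: "https://openrouter.ai/api/v1",  // OpenRouter's OpenAI-compatible API
});

async function ask(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "mistralai/mistral-7b-instruct", // any model ID listed on openrouter.ai
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content ?? "";
}
```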

2.4.8

11 Jan 06:21
d2d16cd
  • Add LM Studio and Ollama as two separate options in the model dropdown (a rough request sketch follows after this list)
  • Add a setup guide
  • Remove the LocalAI option
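
Both tools run a local HTTP server: LM Studio's server is OpenAI-compatible (default port 1234), so the base-URL approach from the OpenRouter sketch above also covers it, while Ollama exposes its own REST API (default port 11434). A rough sketch of an Ollama request, assuming a model has already been pulled locally:

```ts
// Sketch only; the plugin's own request code may differ.
async function askOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral",  // any locally pulled model, e.g. `ollama pull mistral`
      prompt,
      stream: false,     // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response;  // the generated text
}
```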

2.4.7

08 Jan 03:35
c74737b
  • Add a Google API key field in settings
  • Add the Gemini Pro model
    • I find that this model hallucinates quite a lot at high temperatures. Set the temperature close to 0 for better results (a rough sketch of the call follows below).
      • (Screenshots comparing responses at temperature 0.7 and 0.1 omitted.)
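
For illustration, here is roughly how a low temperature is passed when calling Gemini Pro directly with Google's @google/generative-ai SDK; the plugin wires this up from its settings, and the key source below is a placeholder:

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Sketch only; the API key comes from the plugin settings in practice.
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY ?? "");
const model = genAI.getGenerativeModel({
  model: "gemini-pro",
  generationConfig: { temperature: 0.1 }, // keep it low to reduce hallucination
});

async function ask(prompt: string): Promise<string> {
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```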

2.4.6

02 Jan 06:16
b90a165
  • Add Save and Reload buttons to avoid manually toggling the plugin off and on every time settings change. Clicking either button now triggers a plugin reload so the new settings take effect (a rough sketch follows after this list)
  • Fix error handling
    • No more "model_not_found" when the user has no access to the model; it now explicitly says you don't have access
    • Show the missing API key message when the chat model is not properly initialized
    • Show a model switch failure message when Azure credentials are not provided
  • Show the actual model name and chain type used in debug messages
  • Make gpt-4-turbo the default model
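
For the curious, one common way an Obsidian plugin reloads itself is to disable and re-enable its own ID through the internal (untyped) plugin manager. Whether the buttons here do exactly this is an implementation detail, so treat this as a sketch:

```ts
import { App } from "obsidian";

// Sketch only: `app.plugins` is an internal, untyped Obsidian API, hence the cast.
async function reloadPlugin(app: App, pluginId: string): Promise<void> {
  const plugins = (app as any).plugins;
  await plugins.disablePlugin(pluginId); // unload the plugin
  await plugins.enablePlugin(pluginId);  // load it again, picking up the saved settings
}
```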

2.4.5

29 Dec 23:55
773a379
  • Upgraded LangChain.js to v0.0.212
  • Fix bugs and UX issues
    • IME input for East Asian languages no longer sends on Enter
    • The OpenAI proxy base URL now also overrides the embedding model endpoint (#211); a rough sketch follows after this list
    • Clearing the vector store no longer affects new instance creation
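
On the proxy fix: embedding requests now honor the same overridden base URL as chat requests. Conceptually, shown here with the plain openai client rather than the LangChain wrappers the plugin uses, and with a placeholder proxy URL:

```ts
import OpenAI from "openai";

// Sketch: a single base URL override covers both chat and embedding endpoints.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://my-openai-proxy.example.com/v1", // placeholder proxy URL
});

const embedding = await client.embeddings.create({
  model: "text-embedding-ada-002",
  input: "A note chunk to embed",
});
```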

2.4.4

08 Nov 04:07
f3db894
  • Add the shiny new GPT-4 Turbo model with a 128K context length! (I noticed that this new model is very fast and the older ones, including GPT-3, are becoming slower. Not sure if it's just me. Let me know if this happens to you too!)

2.4.3

14 Aug 23:20
effd534
  • Add a default folder for saved conversations

2.4.2

10 Aug 23:57
287f5d3
  • Implement a cross-session local vector store using PouchDB (a rough sketch follows after this list)
  • Add a command to clear the local vector store
  • Add a TTL setting and expired-document removal at plugin load time
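A rough sketch of the idea, with an illustrative document shape and database name rather than the plugin's actual schema:

```ts
import PouchDB from "pouchdb";

// Sketch only; field names and the database name are illustrative.
interface VectorDoc {
  text: string;        // the original note chunk
  embedding: number[]; // its embedding vector
  createdAt: number;   // insertion time, epoch milliseconds
}

const db = new PouchDB<VectorDoc>("copilot-vector-store");

// On plugin load: drop documents older than the configured TTL (in days).
async function removeExpiredDocs(ttlDays: number): Promise<void> {
  const cutoff = Date.now() - ttlDays * 24 * 60 * 60 * 1000;
  const { rows } = await db.allDocs({ include_docs: true });
  for (const row of rows) {
    if (row.doc && row.doc.createdAt < cutoff) {
      await db.remove(row.doc);
    }
  }
}

// The "clear local vector store" command can simply destroy the database.
async function clearVectorStore(): Promise<void> {
  await db.destroy();
}
```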

2.4.1

08 Aug 07:46
106c5db
  • Thanks to @Sokole1's contribution, Local Copilot no longer needs a proxy server and can just use the OpenAI Proxy Base URL setting. Please check the updated setup guide!

2.4.0

02 Aug 07:38
1f945cf
  • Add a proxy server for LocalAI
  • Implement local model access
  • Add LocalAI as an embedding provider
  • Add a step-by-step LocalAI setup guide for Apple Silicon and Windows WSL
  • Created a YouTube demo video for v2.4.0