
Releases: pfrankov/obsidian-local-gpt

1.12.0

29 Sep 20:30

Migrated providers from fetch to remote.net.request. Closes #26
This avoids CORS issues and improves performance.

Refactored AI provider and embedding functionality; added optimized model reloading.
By default, the Ollama API has a 2048-token context limit even for the largest models, so heuristics now request the full context window when needed while also optimizing VRAM consumption.
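A minimal sketch of such a heuristic, assuming a rough chars-per-token approximation; `estimateNumCtx` and the constants are hypothetical names, not the plugin's actual code:

```typescript
const DEFAULT_NUM_CTX = 2048; // Ollama's default context window
const CHARS_PER_TOKEN = 4;    // rough approximation for English text
const ANSWER_RESERVE = 512;   // room reserved for the model's answer

function estimateNumCtx(prompt: string, maxModelCtx: number): number {
  // Approximate how many tokens the prompt needs, plus room to answer.
  const needed = Math.ceil(prompt.length / CHARS_PER_TOKEN) + ANSWER_RESERVE;
  // Keep the default window when it is enough (saves VRAM); otherwise
  // grow it only as far as needed, capped by the model's maximum.
  return needed <= DEFAULT_NUM_CTX
    ? DEFAULT_NUM_CTX
    : Math.min(needed, maxModelCtx);
}
```

The resulting value would then be passed as the `num_ctx` option of an Ollama request.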

Added cache invalidation after changing an embedding model
Previously, the cache persisted even after the embedding model was changed. That was critical because embeddings are not interchangeable between models.
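The fix can be pictured as a cache that remembers which model produced its vectors and drops everything when that model changes. A minimal sketch with hypothetical names, not the plugin's actual code:

```typescript
// Sketch: an embeddings cache that is invalidated whenever the
// embedding model changes, since vectors from different models
// are not interchangeable.
class EmbeddingCache {
  private vectors = new Map<string, number[]>();
  private model = "";

  // Called when settings change; clears the cache on a model switch.
  setModel(model: string): void {
    if (model !== this.model) {
      this.vectors.clear();
      this.model = model;
    }
  }

  get(text: string): number[] | undefined {
    return this.vectors.get(text);
  }

  set(text: string, vector: number[]): void {
    this.vectors.set(text, vector);
  }
}
```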

Added prompt templating for context and selection

Context information is below.
{{=CONTEXT_START=}}
---------------------
{{=CONTEXT=}}
{{=CONTEXT_END=}}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {{=SELECTION=}}
Answer:

More about prompt templating in prompt-templating.md.
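One plausible way to render such a template is plain placeholder substitution, keeping the block between the context markers only when there is context to show. A sketch under those assumptions (`renderPrompt` is a hypothetical helper, not the plugin's implementation):

```typescript
// Sketch: substitute the template placeholders. The CONTEXT_START /
// CONTEXT_END block is assumed to be kept only when context exists.
function renderPrompt(
  template: string,
  context: string,
  selection: string
): string {
  const block = /\{\{=CONTEXT_START=\}\}([\s\S]*?)\{\{=CONTEXT_END=\}\}/;
  // Keep or drop the whole context block depending on whether we
  // actually have context for this request.
  const withBlock = context.trim()
    ? template.replace(block, (_m: string, inner: string) => inner)
    : template.replace(block, "");
  return withBlock
    .replace("{{=CONTEXT=}}", context)
    .replace("{{=SELECTION=}}", selection);
}
```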

1.11.0

22 Sep 21:23

Fixed an issue where the plugin was using the current document as context for Enhanced Actions. It is now always excluded.

Fixed an issue where the position of the streamed text differed from its final displayed position.

Added highlighting of new text in the stream:
[demo recording: Kapture 2024-09-23 at 00 20 52]
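Highlighting the freshly streamed text boils down to knowing which suffix is new. A minimal sketch, assuming the stream only appends (`newSuffix` is a hypothetical helper, not the plugin's code):

```typescript
// Sketch: return the part of `current` that was not yet displayed,
// so only that span gets the "new text" highlight.
function newSuffix(previous: string, current: string): string {
  return current.startsWith(previous)
    ? current.slice(previous.length)
    : current; // the stream was rewritten; treat it all as new
}
```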

1.10.0

16 Sep 00:10

🎉 Implemented Enhanced Actions

That is, the ability to use context from links and backlinks, also known as RAG (Retrieval-Augmented Generation).

The idea is to enhance your actions with relevant context, not from your entire vault but only from the related documents. It fits perfectly with Obsidian's philosophy of linked documents.

Now you can create richer articles while writing, produce more in-depth summaries of a whole topic, ask questions of your documents, translate texts without losing context, recap work meetings, or brainstorm on a given topic.
Share your applications of Enhanced Actions in the Discussion.

Setup

1. You need to install an embedding model for Ollama:

  • For English: ollama pull nomic-embed-text (fastest)
  • For other languages: ollama pull bge-m3 (slower, but more accurate)

Or just use text-embedding-3-large with OpenAI.

2. Select the embedding model in the plugin's settings

Also try to use the largest Default model with the largest context window.

3. Select some text and run any action on it

No additional steps are required. There is no progress indication for now, but you can check the quality of the results.

1.9.0

07 Sep 21:21

Changed default model to Gemma 2: 9B

Added a New System Prompt action for creating actions tailored to user needs.

In Settings, added a two-line limit for both Prompt and System Prompt. Closes #27

1.8.1

03 Aug 20:49

Added optional {{=SELECTION=}} keyword for the prompt. Closes #16, #20

Added the ability to assign hotkeys to actions. Closes #23

1.8.0

16 Jun 19:07

Added Creativity (temperature) dropdown
Changed default model to Qwen2

1.7.0

18 Jan 22:00

Added support for multimodal models like LLaVA, so it's now also possible to query your images.

Demo recorded with no speedup: MacBook Pro 13, M1, 16 GB, Ollama, bakllava.

1.6.3

16 Jan 09:25

Added a [DONE] handler for OpenAI-like providers. Closes #9

1.6.2

13 Jan 00:21

Enabled mobile installation

1.6.1

12 Jan 23:54

Better settings separation