Releases: pfrankov/obsidian-local-gpt
1.14.7
Fixed MIME types for images; removed context optimizations for the Ollama provider when the selected text contains an image.
1.14.6
Fixed an issue with parsing OpenRouter requests.
1.14.5
Increased the threshold at which the context size can be deoptimized on the next call, in order to reduce VRAM consumption. Closes #54
1.14.4
Changed the default URL for OpenAI-like providers to v1/
Added the necessary headers to embedding requests
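As a rough illustration of the v1/ default, a provider base URL can be normalized so that requests always target the OpenAI-compatible v1/ path. This is a minimal sketch, not the plugin's actual code; the function name and the simple suffix check are assumptions.

```typescript
// Hypothetical sketch: ensure an OpenAI-like base URL ends with "v1/".
// Not the plugin's real implementation; naive suffix check for illustration.
function normalizeBaseUrl(url: string): string {
  // Guarantee a trailing slash first.
  let base = url.endsWith("/") ? url : url + "/";
  // Append the OpenAI-compatible API version path if it is missing.
  if (!base.endsWith("v1/")) {
    base += "v1/";
  }
  return base;
}
```

For example, `normalizeBaseUrl("http://localhost:11434")` would yield `http://localhost:11434/v1/`, while a URL already ending in `v1/` is left untouched.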
1.14.3
Fixed the delay between first load and showing the settings tab. Closes #40
1.14.2
Changed the heuristic, introduced in 1.12.0, for setting the context length of requests. Previously it could cause problems with requests larger than 2048 tokens.
1.14.1
Fixed PDF caching, which didn't work in 1.14.0 if you updated from 1.13+.
Increased the context cap from 7,000 to 10,000 characters for very long requests.
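The character cap described above can be pictured as a simple truncation step before the context is sent to the model. This is a hedged sketch only; the constant and function names are illustrative, not the plugin's API.

```typescript
// Hypothetical sketch of the character-based context cap
// (10,000 characters as of 1.14.1). Names are illustrative.
const CONTEXT_CAP = 10000;

function capContext(context: string, cap: number = CONTEXT_CAP): string {
  // Keep short contexts intact; truncate anything over the cap.
  return context.length <= cap ? context : context.slice(0, cap);
}
```

A context of 12,000 characters would be cut down to exactly 10,000, while shorter contexts pass through unchanged.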
1.14.0
Added a nice ✨ Enhancing loader for the embedding process
Added PDF caching. No more waiting for PDFs to be parsed.
Limited context to 7,000 characters. It should be enough for anyone, and it provides more precise responses for Enhanced Actions.
1.13.1
Added the first batch of tests.
Fixed network requests on mobile. Closes #35
1.13.0
🎉 PDF support for Enhanced Actions
Works only with text-based PDFs. No OCR.
Persistent storage for Enhanced Actions cache
So it persists even after Obsidian restarts.
This significantly speeds up work with documents that have already been used for EA and have not changed.
Check out what the first and second calls look like for the same 8 nested documents (39 chunks):
Note: changing the embedding model resets the cache.
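The cache behavior described in this release can be sketched as a store keyed by content, invalidated wholesale when the embedding model changes. This is an assumption-laden illustration (class, method, and model names are all hypothetical), not the plugin's real cache implementation.

```typescript
// Hypothetical sketch of an embeddings cache that is reset when the
// embedding model changes. All names here are illustrative.
class EmbeddingsCache {
  private model: string;
  private store = new Map<string, number[]>();

  constructor(model: string) {
    this.model = model;
  }

  // Switching to a different model invalidates every cached embedding,
  // since vectors from different models are not comparable.
  setModel(model: string): void {
    if (model !== this.model) {
      this.store.clear();
      this.model = model;
    }
  }

  get(contentHash: string): number[] | undefined {
    return this.store.get(contentHash);
  }

  set(contentHash: string, embedding: number[]): void {
    this.store.set(contentHash, embedding);
  }
}
```

Keying by a hash of the chunk content is what makes the second call on unchanged documents fast: only new or modified chunks need to be re-embedded.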