Releases · superlinear-ai/raglite
v0.5.0
What's Changed
- style: reduce httpx log level by @lsorber in #59
- feat: let LLM choose whether to retrieve context by @lsorber in #62 (see the sketch after this list)
- fix: support pgvector v0.7.0+ by @undo76 in #63
- docs: add GitHub star history to README by @MattiaMolon in #65
- feat: add MCP server by @lsorber in #67 (a sketch follows at the end of this release's notes)
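A minimal sketch of the general pattern behind "let the LLM choose whether to retrieve context": the model is given a retrieval tool and only calls it when the question actually needs external context. This uses litellm's OpenAI-style tool calling; the model name and the `search_documents` function are placeholders for illustration, not RAGLite's actual API.

```python
# Sketch: the LLM decides whether to retrieve by choosing to call a retrieval tool.
# `search_documents` is a hypothetical stand-in for your own retrieval function.
import json

from litellm import completion


def search_documents(query: str) -> str:
    """Hypothetical retrieval function returning concatenated context chunks."""
    return "…retrieved context for: " + query


tools = [
    {
        "type": "function",
        "function": {
            "name": "search_documents",
            "description": "Search the knowledge base for context relevant to the query.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What does RAGLite's hybrid search do?"}]
response = completion(model="gpt-4o-mini", messages=messages, tools=tools, tool_choice="auto")
message = response.choices[0].message

if message.tool_calls:  # The model decided that retrieval is needed.
    messages.append(message)
    for call in message.tool_calls:
        context = search_documents(**json.loads(call.function.arguments))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": context})
    response = completion(model="gpt-4o-mini", messages=messages)

print(response.choices[0].message.content)
```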
New Contributors
- @MattiaMolon made their first contribution in #65
Full Changelog: v0.4.1...v0.5.0
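A minimal sketch of what an MCP server that exposes retrieval as a tool can look like, using the official `mcp` Python SDK's FastMCP helper. The server name, tool, and placeholder body are assumptions for illustration, not RAGLite's actual MCP server from #67.

```python
# Sketch of an MCP server exposing a single retrieval tool over stdio.
# The tool body is a hypothetical placeholder, not RAGLite's implementation.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("raglite-search")


@mcp.tool()
def search_knowledge_base(query: str) -> str:
    """Search the document store and return the most relevant chunks as text."""
    # Replace this placeholder with a real call into your RAG pipeline.
    return f"Top chunks for: {query!r}"


if __name__ == "__main__":
    # Serves the tool over stdio so MCP clients (e.g. Claude Desktop) can call it.
    mcp.run()
```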
v0.4.1
v0.4.0
What's Changed
- feat: improve late chunking and optimize pgvector settings by @lsorber in #51
  - Add a workaround for #24 to increase the embedder's context size from 512 tokens to a user-definable size.
  - Increase the default embedder context size to 1024 tokens (increasing it further degrades bge-m3's performance).
  - Upgrade llama-cpp-python to the latest version.
  - Test rerankers more robustly with Kendall's rank correlation coefficient (a sketch follows after this release's notes).
  - Optimize pgvector's settings.
  - Offer better control of oversampling in hybrid and vector search (see the sketch after this list).
  - Upgrade to PostgreSQL 17.
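For context on the oversampling item above, an illustrative sketch (not RAGLite's implementation): each retriever fetches `oversample × k` candidates, the rankings are fused with reciprocal rank fusion, and only the top `k` results are kept. All function names here are hypothetical placeholders.

```python
# Sketch of oversampling in hybrid search: retrieve oversample * k candidates
# from each retriever, fuse the rankings, then keep only the top k.
from collections import defaultdict


def vector_search(query: str, limit: int) -> list[str]:
    """Hypothetical dense retriever returning chunk ids ranked by similarity."""
    return [f"chunk-{i}" for i in range(limit)]


def keyword_search(query: str, limit: int) -> list[str]:
    """Hypothetical sparse (BM25-style) retriever returning ranked chunk ids."""
    return [f"chunk-{i}" for i in range(2, limit + 2)]


def hybrid_search(query: str, k: int = 5, oversample: int = 4) -> list[str]:
    """Fuse oversampled rankings with reciprocal rank fusion and truncate to k."""
    limit = oversample * k  # Oversampling: pull more candidates than we will return.
    scores: dict[str, float] = defaultdict(float)
    for ranking in (vector_search(query, limit), keyword_search(query, limit)):
        for rank, chunk_id in enumerate(ranking):
            scores[chunk_id] += 1.0 / (60 + rank)  # RRF with the usual constant 60.
    return sorted(scores, key=scores.get, reverse=True)[:k]


print(hybrid_search("what is late chunking?", k=3, oversample=4))
```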
Full Changelog: v0.3.0...v0.4.0
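A sketch of the kind of reranker test the Kendall's rank correlation item above refers to: compute the correlation between the reranker's ordering and a reference ordering, and assert that the agreement is high. The `rerank` function is a hypothetical placeholder; only `scipy.stats.kendalltau` is a real dependency.

```python
# Sketch of testing a reranker with Kendall's rank correlation coefficient (tau).
from scipy.stats import kendalltau


def rerank(query: str, chunks: list[str]) -> list[str]:
    """Hypothetical reranker; in a real test this would call the actual model."""
    return sorted(chunks)  # Placeholder ordering.


def test_reranker_agrees_with_reference() -> None:
    chunks = ["a", "b", "c", "d", "e"]
    reference_order = ["a", "b", "c", "d", "e"]  # Ground-truth ranking, best first.
    reranked = rerank("example query", chunks)
    # Convert both orderings to ranks over the same items so they can be compared.
    reference_ranks = [reference_order.index(chunk) for chunk in reranked]
    tau, _ = kendalltau(range(len(reranked)), reference_ranks)
    assert tau > 0.5, f"Reranker disagrees with the reference ordering (tau={tau:.2f})"


test_reranker_agrees_with_reference()
```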
v0.3.0
v0.2.1
What's Changed
- fix: add fallbacks for model info by @undo76 in #44
- fix: improve unpacking of keyword search results by @lsorber in #46
- fix: upgrade rerankers and remove flashrank patch by @lsorber in #47
- fix: improve structured output extraction and query adapter updates by @emilradix in #34
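For the structured output extraction item, a minimal sketch of the general technique (not the code from #34): constrain the model to JSON output and validate the reply against a Pydantic schema. The model name, prompt, and `QueryRewrite` schema are assumptions for illustration.

```python
# Sketch of structured output extraction: request JSON only, then validate the
# reply against a Pydantic schema so malformed output fails loudly.
from litellm import completion
from pydantic import BaseModel


class QueryRewrite(BaseModel):
    """Example schema for an extracted object."""
    rewritten_query: str
    keywords: list[str]


response = completion(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's question for retrieval. Respond with JSON only, "
                'matching {"rewritten_query": str, "keywords": [str, ...]}.'
            ),
        },
        {"role": "user", "content": "how do i tune pgvector for raglite?"},
    ],
    response_format={"type": "json_object"},  # Ask the model for valid JSON output.
)

# Pydantic raises a ValidationError if the reply does not match the schema.
rewrite = QueryRewrite.model_validate_json(response.choices[0].message.content)
print(rewrite.rewritten_query, rewrite.keywords)
```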
New Contributors
- @undo76 made their first contribution in #44
- @emilradix made their first contribution in #34
Full Changelog: v0.2.0...v0.2.1