From 4a837803bebb64768ee7a0ec48682d82d2fa52cf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tuana=20=C3=87elik?=
Date: Tue, 13 Aug 2024 15:27:43 +0200
Subject: [PATCH] Update index.md

---
 content/blog/query-expansion/index.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/blog/query-expansion/index.md b/content/blog/query-expansion/index.md
index 512b2316..8523e4c6 100644
--- a/content/blog/query-expansion/index.md
+++ b/content/blog/query-expansion/index.md
@@ -5,8 +5,8 @@ description: Expand keyword queries to improve recall and provide more context t
 featured_image: thumbnail.png
 images: ["blog/query-expansion/thumbnail.png"]
 toc: True
-date: 2024-08-12
-last_updated: 2024-08-12
+date: 2024-08-14
+last_updated: 2024-08-14
 authors:
   - Tuana Celik
 tags: ["Retrieval", "RAG", "Advanced Use Cases"]
@@ -223,4 +223,4 @@ Query Expansion is a great technique that will allow you to get a wider range of
 
 This does however mean that we heavily rely on the quality of the provided query. Query expansion allows you to navigate this issue by generating similar queries to the user query.
 
-In my opinion, one of the main advantages of this technique is that it allows you to avoid embedding documentation at each update, while still managing to increase the relevance of retrieved documents at query time. Keyword retrieval doesn’t require any extra embedding step, so the only inferencing happening at retrieval time in this scenario is when we ask an LLM to generate a certain number of similar queries.
\ No newline at end of file
+In my opinion, one of the main advantages of this technique is that it allows you to avoid embedding documentation at each update, while still managing to increase the relevance of retrieved documents at query time. Keyword retrieval doesn’t require any extra embedding step, so the only inferencing happening at retrieval time in this scenario is when we ask an LLM to generate a certain number of similar queries.