From 2606de0f6234aabcd559bbf81827a3a1e8717403 Mon Sep 17 00:00:00 2001
From: Lachlan Lindsay
Date: Tue, 2 Jan 2024 15:51:17 -0800
Subject: [PATCH] updates question-answer url

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 48a3834c4..000dbf4ab 100644
--- a/README.md
+++ b/README.md
@@ -70,7 +70,7 @@ The app can be used in two ways:
 
 The left panel of the app (shown in red in the above image) has several user-configurable parameters.
 
-`Number of eval questions` - This is the number of question-answer pairs to auto-generate for the given inputs documents. As mentioned above, question-answer pair auto-generation will use Langchain's `QAGenerationChain` with prompt specified [here](https://github.com/hwchase17/langchain/blob/master/langchain/chains/qa_generation/prompt.py).
+`Number of eval questions` - This is the number of question-answer pairs to auto-generate for the given inputs documents. As mentioned above, question-answer pair auto-generation will use Langchain's `QAGenerationChain` with prompt specified [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_generation/prompt.py).
 
 `Chunk size` - Number of characters per chunk when the input documents are split. This [can impact answer quality](https://lancemartin.notion.site/lancemartin/Q-A-assistant-limitations-f576bf55b61c44e0970330ac3883315e). Retrievers often use text embedding similarity to select chunks related to the question. If the chunks are too large, each chunk may contain more information unrelated to the question, which may degrade the summarized answer quality. If chunks are too small, important context may be left out of the retrieved chunks.
 
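The `Chunk size` parameter described in the patched README context controls how input documents are split before retrieval. As an aside, the trade-off it names can be sketched with a minimal character-based splitter — this is an illustrative stand-in, not Langchain's actual text-splitter implementation, and the function name and `overlap` parameter are assumptions for the sketch:

```python
# Illustrative sketch (not Langchain's implementation): splitting a document
# into fixed-size character chunks, as controlled by `Chunk size`.
# An optional overlap keeps context that would otherwise straddle a boundary.

def split_into_chunks(text: str, chunk_size: int, overlap: int = 0) -> list[str]:
    """Split `text` into chunks of at most `chunk_size` characters.

    Consecutive chunks share `overlap` characters so that context spanning
    a chunk boundary is not lost entirely.
    """
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    if not 0 <= overlap < chunk_size:
        raise ValueError("overlap must be in [0, chunk_size)")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "a" * 250
chunks = split_into_chunks(doc, chunk_size=100, overlap=20)
print(len(chunks))     # 4 chunks: start offsets step by 80 over 250 chars
print(len(chunks[0]))  # 100
```

A larger `chunk_size` yields fewer, broader chunks (more unrelated text per retrieved chunk); a smaller one yields narrower chunks that may drop surrounding context — the trade-off the README paragraph describes.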