From 3df6195cb0a6b15a38f2c9b450e749b266089143 Mon Sep 17 00:00:00 2001
From: "Jin, Qiao" <89779290+JinBridger@users.noreply.github.com>
Date: Thu, 31 Oct 2024 16:57:35 +0800
Subject: [PATCH] Fix application quickstart (#12305)

* fix graphrag quickstart

* fix axolotl quickstart

* fix ragflow quickstart

* fix ragflow quickstart

* fix graphrag toc

* fix comments

* fix comment

* fix comments
---
 docs/mddocs/Quickstart/axolotl_quickstart.md  |  4 +--
 docs/mddocs/Quickstart/graphrag_quickstart.md |  3 +-
 docs/mddocs/Quickstart/ragflow_quickstart.md  | 32 ++++++++++++++----
 .../axolotl/requirements-xpu.txt              |  2 +-
 4 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/docs/mddocs/Quickstart/axolotl_quickstart.md b/docs/mddocs/Quickstart/axolotl_quickstart.md
index e50b9f8e874..ccbfd09ae14 100644
--- a/docs/mddocs/Quickstart/axolotl_quickstart.md
+++ b/docs/mddocs/Quickstart/axolotl_quickstart.md
@@ -45,10 +45,10 @@ Install [axolotl v0.4.0](https://github.com/OpenAccess-AI-Collective/axolotl/tre

 ```bash
 # install axolotl v0.4.0
-git clone https://github.com/OpenAccess-AI-Collective/axolotl/tree/v0.4.0
+git clone https://github.com/OpenAccess-AI-Collective/axolotl -b v0.4.0
 cd axolotl
 # replace requirements.txt
-remove requirements.txt
+rm requirements.txt
 wget -O requirements.txt https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
 pip install -e .
 pip install transformers==4.36.0
diff --git a/docs/mddocs/Quickstart/graphrag_quickstart.md b/docs/mddocs/Quickstart/graphrag_quickstart.md
index 52e08ffe060..bd45db85f4c 100644
--- a/docs/mddocs/Quickstart/graphrag_quickstart.md
+++ b/docs/mddocs/Quickstart/graphrag_quickstart.md
@@ -18,7 +18,8 @@ The [GraphRAG project](https://github.com/microsoft/graphrag) is designed to lev

 Follow the steps in [Run Ollama with IPEX-LLM on Intel GPU Guide](./ollama_quickstart.md) to install `ipex-llm[cpp]==2.1.0` and run Ollama on Intel GPU. Ensure that `ollama serve` is running correctly and can be accessed through a local URL (e.g., `https://127.0.0.1:11434`).

-**Please note that for GraphRAG, we highly recommand using the stable version of ipex-llm through `pip install ipex-llm[cpp]==2.1.0`**.
+> [!NOTE]
+> Please note that for GraphRAG, we highly recommend using the stable version of ipex-llm through `pip install ipex-llm[cpp]==2.1.0`.

 ### 2. Prepare LLM and Embedding Model

diff --git a/docs/mddocs/Quickstart/ragflow_quickstart.md b/docs/mddocs/Quickstart/ragflow_quickstart.md
index a41cb66386f..7ef546c9515 100644
--- a/docs/mddocs/Quickstart/ragflow_quickstart.md
+++ b/docs/mddocs/Quickstart/ragflow_quickstart.md
@@ -21,6 +21,7 @@
 - [Pull Model](./ragflow_quickstart.md#2-pull-model)
 - [Start `RAGFlow` Service](./ragflow_quickstart.md#3-start-ragflow-service)
 - [Using `RAGFlow`](./ragflow_quickstart.md#4-using-ragflow)
+- [Troubleshooting](./ragflow_quickstart.md#5-troubleshooting)

 ## Quickstart

@@ -71,7 +72,7 @@ Now we need to pull a model for RAG using Ollama. Here we use [Qwen/Qwen2-7B](ht
 You can either clone the repository or download the source zip from [github](https://github.com/infiniflow/ragflow/archive/refs/heads/main.zip):

 ```bash
-$ git clone https://github.com/infiniflow/ragflow.git
+git clone https://github.com/infiniflow/ragflow.git
 ```

 #### 3.2 Environment Settings

@@ -79,7 +80,7 @@ $ git clone https://github.com/infiniflow/ragflow.git
 Ensure `vm.max_map_count` is set to at least 262144.
 To check the current value of `vm.max_map_count`, use:

 ```bash
-$ sysctl vm.max_map_count
+sysctl vm.max_map_count
 ```

 ##### Changing `vm.max_map_count`

@@ -87,7 +88,7 @@ $ sysctl vm.max_map_count
 To set the value temporarily, use:

 ```bash
-$ sudo sysctl -w vm.max_map_count=262144
+sudo sysctl -w vm.max_map_count=262144
 ```

 To make the change permanent and ensure it persists after a reboot, add or update the following line in `/etc/sysctl.conf`:
@@ -104,10 +105,10 @@ Build the pre-built Docker images and start up the server:
 > Running the following commands automatically downloads the *dev* version RAGFlow Docker image. To download and run a specified Docker version, update `RAGFLOW_VERSION` in **docker/.env** to the intended version, for example `RAGFLOW_VERSION=v0.7.0`, before running the following commands.

 ```bash
-$ export no_proxy=localhost,127.0.0.1
-$ cd ragflow/docker
-$ chmod +x ./entrypoint.sh
-$ docker compose up -d
+export no_proxy=localhost,127.0.0.1
+cd ragflow/docker
+chmod +x ./entrypoint.sh
+docker compose up -d
 ```

 > [!NOTE]
@@ -116,7 +117,7 @@ $ docker compose up -d
 Check the server status after having the server up and running:

 ```bash
-$ docker logs -f ragflow-server
+docker logs -f ragflow-server
 ```

 Upon successful deployment, you will see logs in the terminal similar to the following:
@@ -237,3 +238,18 @@ Input your questions into the **Message Resume Assistant** textbox at the bottom
 #### Exit

 To shut down the RAGFlow server, use **Ctrl+C** in the terminal where the RAGFlow server is running, then close your browser tab.
+
+### 5. Troubleshooting
+
+#### Stuck when parsing files: `Node has failed for xx times in a row, putting on 30 second timeout`
+
+This happens when there is not enough space left on the disk and the Docker container stops working. Please free up disk space and make sure the disk usage stays below 90%.
+
+#### `Max retries exceeded with url: /encodings/cl100k_base.tiktoken` while starting the RAGFlow service through Docker
+
+This may be caused by a network problem. To resolve it, you can try the following:
+
+1. Attach to the Docker container with `docker exec -it ragflow-server /bin/bash`.
+2. Set environment variables such as `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` at the beginning of `/ragflow/entrypoint.sh`.
+3. Stop the service with `docker compose stop`.
+4. Restart the service with `docker compose start`.
diff --git a/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt b/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
index 070bf7e0f4c..7001e7b5499 100644
--- a/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
+++ b/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
@@ -40,4 +40,4 @@ s3fs
 gcsfs
 # adlfs

-trl>=0.7.9
+trl>=0.7.9, <=0.9.6
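
For reference, the `vm.max_map_count` commands that this patch cleans up in `ragflow_quickstart.md` can be run as one sequence. The following is a minimal sketch, not part of the patch itself; it assumes a Linux host with sudo access, and uses `grep`/`tee -a` to approximate the docs' "add or update" instruction (it only adds the entry when one is missing):

```bash
# Check the current value (the docs require at least 262144)
sysctl vm.max_map_count

# Raise it for the running system (lost after reboot)
sudo sysctl -w vm.max_map_count=262144

# Persist it: append to /etc/sysctl.conf if not already present, then reload
grep -q '^vm.max_map_count' /etc/sysctl.conf || \
  echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```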
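
Likewise, the four troubleshooting steps this patch adds for the `cl100k_base.tiktoken` download failure could be scripted rather than performed interactively. A rough sketch under a few assumptions: the container is named `ragflow-server` as in the docs, the image ships GNU sed, and `http://proxy.example.com:8080` is a hypothetical placeholder for your actual proxy:

```bash
# Steps 1-2: insert proxy settings near the top of /ragflow/entrypoint.sh
# inside the container (the docs attach with `docker exec -it`; running sed
# through `docker exec` performs the same edit non-interactively).
# The proxy URL below is a placeholder; replace it with your own.
docker exec ragflow-server sed -i \
  '1a export HTTP_PROXY=http://proxy.example.com:8080\nexport HTTPS_PROXY=http://proxy.example.com:8080\nexport NO_PROXY=localhost,127.0.0.1' \
  /ragflow/entrypoint.sh

# Steps 3-4: stop and restart the service so the modified entrypoint takes effect
cd ragflow/docker
docker compose stop
docker compose start
```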