diff --git a/README.md b/README.md
index e6cb973bd5..4e3a9e16d2 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,9 @@
-***The WebUI is ported from [Text-Generation-WebUI](https://github.com/oobabooga/text-generation-webui) for running local LLM on Intel GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) using [IPEX-LLM](https://github.com/intel-analytics/bigdl).***
+***The WebUI is ported from [Text-Generation-WebUI](https://github.com/oobabooga/text-generation-webui) for running local LLMs on Intel GPUs (e.g., a local PC with iGPU, or a discrete GPU such as Arc, Flex or Max) using [IPEX-LLM](https://github.com/intel-analytics/ipex-llm).***
## Quick Start
-To get started, please see the step-by-step [quickstart](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html).
+To get started, please see the step-by-step [quickstart](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html).
-[](https://bigdl.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html)
+[](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html)
## User Guide
For more information, see the user guide below.
@@ -13,7 +13,7 @@ For more information, see the user guide below.
-Before starting all the steps, you need to download and unzip the text-generation-webui based on `IPEX-LLM` optimizations.
+Before starting all the steps, you need to download and unzip the text-generation-webui archive with `IPEX-LLM` optimizations.
```bash
-https://github.com/intel-analytics/text-generation-webui/archive/refs/heads/IPEX-LLM.zip
+https://github.com/intel-analytics/text-generation-webui/archive/refs/heads/ipex-llm.zip
```
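+
+For example, from a Unix-like shell (a minimal sketch; on Windows you can simply open the link in a browser, and the unzipped folder name, which follows GitHub's `repo-branch` convention, may differ):
+
+```bash
+# Download the ipex-llm branch archive and unzip it
+curl -L -o ipex-llm.zip https://github.com/intel-analytics/text-generation-webui/archive/refs/heads/ipex-llm.zip
+unzip ipex-llm.zip
+cd text-generation-webui-ipex-llm
+```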
### 2. Prepare the Environment on Windows
@@ -22,7 +22,7 @@ Please use a python environment management tool (we recommend using Conda) to cr
#### 2.1 Install IPEX-LLM
-Please see [IPEX-LLM Installation on Windows](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#windows) for more details to install IPEX-LLM on your Client.
+Please see [IPEX-LLM Installation on Windows](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#windows) for details on installing IPEX-LLM on your client machine.
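+
+For example (a minimal sketch, assuming Conda is installed; please verify the exact commands and index URL against the linked installation guide, as they may have changed):
+
+```bash
+# Create and activate an isolated Conda environment
+conda create -n llm python=3.11 libuv
+conda activate llm
+# Install IPEX-LLM with Intel GPU (xpu) support; check the guide for the current index URL
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+```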
#### 2.2 Install Other Required Dependencies
@@ -67,14 +67,14 @@ This share link expires in 72 hours. For free permanent hosting and GPU upgrades
##### 4.1.1 Download the Model
-If you need to download a model, enter the Hugging Face username or model path, for instance: `Qwen/Qwen-7B-Chat`.
+If you need to download a model, enter its Hugging Face ID (in `username/model` form), for instance: `Qwen/Qwen-7B-Chat`.
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image.png)
+![Image text](./readme_folder/image.png)
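+
+Alternatively, you can fetch a model from the command line (a sketch using the `huggingface_hub` CLI, which is not part of this WebUI; the target folder is an assumption):
+
+```bash
+pip install -U huggingface_hub
+# Download straight into the WebUI's models folder
+huggingface-cli download Qwen/Qwen-7B-Chat --local-dir models/Qwen-7B-Chat
+```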
##### 4.1.2 Place the Model
-After you have downloaded the model (or if you already have the model locally), please place the model in `Text-Generation-WebUI/models` directory.
+After you have downloaded the model (or if you already have it locally), please place it in the `Text-Generation-WebUI/models` directory.
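+
+For instance (hypothetical paths):
+
+```bash
+# Move a locally downloaded model folder into the WebUI's models directory
+mv /path/to/Qwen-7B-Chat Text-Generation-WebUI/models/
+```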
After completing the two steps above, you may click the `Model` button to select your model.
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image1.png)
+![Image text](./readme_folder/image1.png)
#### 4.2 Enable IPEX-LLM Optimizations
@@ -83,7 +83,7 @@ Text-Generation-WebUI supports multiple backends, including `IPEX-LLM`, `Transfo
-Then please select the device according to your device (the default device is `GPU`).
+Then please select the device you want to use (the default is `GPU`).
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image2.png)
+![Image text](./readme_folder/image2.png)
#### 4.3 Load Model in Low Precision
@@ -99,9 +99,9 @@ Notes:
- Please select the `optimize-model` and `use_cache` options to accelerate the model.
-Now you may click the `Load` button to load the model with IPEX-LLM optimizations.
+Now you may click the `Load` button to load the model with IPEX-LLM optimizations. If everything goes well, you will see a message like the one shown below.
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image3.png)
+![Image text](./readme_folder/image3.png)
-##### 4.4 Run the Model on WebUI
+#### 4.4 Run the Model on WebUI
@@ -128,7 +128,7 @@ Notes:
* Please see [Chat-Tab Wiki](https://github.com/oobabooga/text-generation-webui/wiki/01-%E2%80%90-Chat-Tab) for more details.
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image4.png)
+![Image text](./readme_folder/image4.png)
##### 4.4.2 Default Tab
@@ -138,7 +138,7 @@ This tab contains two main text boxes: Input, where you enter your prompt, and O
Please see [Default-Tab Wiki](https://github.com/oobabooga/text-generation-webui/wiki/02-%E2%80%90-Default-and-Notebook-Tabs#default-tab) for more details.
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image5.png)
+![Image text](./readme_folder/image5.png)
##### 4.4.3 Notebook Tab
@@ -147,4 +147,4 @@ You may use the `Notebook tab` to do exactly what the `Default tab` does, with t
Please see [Notebook-Tab Wiki](https://github.com/oobabooga/text-generation-webui/wiki/02-%E2%80%90-Default-and-Notebook-Tabs#notebook-tab) for more details.
-![Image text](https://github.com/intel-analytics/text-generation-webui/blob/8ebee0651dd56012c4a9e0ba6932efec4c7d1b2e/readme_folder/image6.png)
+![Image text](./readme_folder/image6.png)
diff --git a/readme_folder/image.png b/readme_folder/image.png
index b71459cb20..6147726809 100644
Binary files a/readme_folder/image.png and b/readme_folder/image.png differ
diff --git a/readme_folder/image1.png b/readme_folder/image1.png
index 809ef7182d..a3f1bafa96 100644
Binary files a/readme_folder/image1.png and b/readme_folder/image1.png differ
diff --git a/readme_folder/image2.png b/readme_folder/image2.png
index cb6b8cf3fa..70dc86136a 100644
Binary files a/readme_folder/image2.png and b/readme_folder/image2.png differ
diff --git a/readme_folder/image3.png b/readme_folder/image3.png
index 61947a1edb..3f9c763e15 100644
Binary files a/readme_folder/image3.png and b/readme_folder/image3.png differ