diff --git a/docs/readthedocs/source/_templates/sidebar_quicklinks.html b/docs/readthedocs/source/_templates/sidebar_quicklinks.html index 6d7015568e1..5da4603dd4f 100644 --- a/docs/readthedocs/source/_templates/sidebar_quicklinks.html +++ b/docs/readthedocs/source/_templates/sidebar_quicklinks.html @@ -3,111 +3,40 @@
diff --git a/docs/readthedocs/source/_toc.yml b/docs/readthedocs/source/_toc.yml index 6b12ef853c6..89a29102f42 100644 --- a/docs/readthedocs/source/_toc.yml +++ b/docs/readthedocs/source/_toc.yml @@ -34,6 +34,12 @@ subtrees: title: "CPU" - file: doc/LLM/Overview/install_gpu title: "GPU" + - file: doc/LLM/Quickstart/index + title: "Quickstart" + subtrees: + - entries: + - file: doc/LLM/Quickstart/install_windows_gpu + - file: doc/LLM/Quickstart/webui_quickstart - file: doc/LLM/Overview/KeyFeatures/index title: "Key Features" subtrees: @@ -64,14 +70,8 @@ subtrees: # title: "Tips and Known Issues" - file: doc/PythonAPI/LLM/index title: "API Reference" - - file: doc/LLM/Overview/FAQ/index + - file: doc/LLM/Overview/FAQ/faq title: "FAQ" - subtrees: - - entries: - - file: doc/LLM/Overview/FAQ/general_info - title: "General Info & Concepts" - - file: doc/LLM/Overview/FAQ/resolve_error - title: "How to Resolve Errors" - entries: - file: doc/Orca/index diff --git a/docs/readthedocs/source/conf.py b/docs/readthedocs/source/conf.py index ec5f4e85b7b..76bac4d1a05 100644 --- a/docs/readthedocs/source/conf.py +++ b/docs/readthedocs/source/conf.py @@ -37,7 +37,7 @@ # -- Project information ----------------------------------------------------- html_theme = "pydata_sphinx_theme" html_theme_options = { - "header_links_before_dropdown": 9, + "header_links_before_dropdown": 3, "icon_links": [ { "name": "GitHub Repository for BigDL", diff --git a/docs/readthedocs/source/doc/LLM/Overview/FAQ/resolve_error.md b/docs/readthedocs/source/doc/LLM/Overview/FAQ/faq.md similarity index 86% rename from docs/readthedocs/source/doc/LLM/Overview/FAQ/resolve_error.md rename to docs/readthedocs/source/doc/LLM/Overview/FAQ/faq.md index 71348cca9df..62812e8dc3d 100644 --- a/docs/readthedocs/source/doc/LLM/Overview/FAQ/resolve_error.md +++ b/docs/readthedocs/source/doc/LLM/Overview/FAQ/faq.md @@ -1,8 +1,13 @@ -# FAQ: How to Resolve Errors +# Frequently Asked Questions (FAQ) -Refer to this section for 
common issues faced while using BigDL-LLM. +## General Info & Concepts -## Installation Error +### GGUF format usage with BigDL-LLM? + +BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations). +Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support. + +## How to Resolve Errors ### Fail to install `bigdl-llm` through `pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu` @@ -10,9 +15,6 @@ You could try to install BigDL-LLM dependencies for Intel XPU from source archiv - For Windows system, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for the steps. - For Linux system, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#id3) for the steps. - -## Runtime Error - ### PyTorch is not linked with support for xpu devices 1. Before running on Intel GPUs, please make sure you've prepared the environment following the [installation instructions](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html). @@ -21,7 +23,7 @@ You could try to install BigDL-LLM dependencies for Intel XPU from source archiv 4. If you have multiple GPUs, you could refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.html) for details about GPU selection. 5. If you do inference using the optimized model on Intel GPUs, you also need to set `to('xpu')` for input tensors. 
-### import `intel_extension_for_pytorch` error on Windows GPU +### Import `intel_extension_for_pytorch` error on Windows GPU Please refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#error-loading-intel-extension-for-pytorch) for a detailed guide. We list the possible missing requirements in the environment which could lead to this error. @@ -50,7 +52,7 @@ This error is caused by out of GPU memory. Some possible solutions to decrease G 2. You could try `model = model.float16()` or `model = model.bfloat16()` before moving the model to GPU to use less GPU memory. 3. You could try setting `cpu_embedding=True` when calling `from_pretrained` of an AutoClass or the `optimize_model` function. -### failed to enable AMX +### Failed to enable AMX You could use `export BIGDL_LLM_AMX_DISABLED=1` to disable AMX manually and solve this error. @@ -58,7 +60,7 @@ You could use `export BIGDL_LLM_AMX_DISABLED=1` to disable AMX manually and solv You may encounter this error during finetuning on multiple GPUs. Please try `sudo apt install level-zero-dev` to fix it. -### random and unreadable output of Gemma-7b-it on Arc770 ubuntu 22.04 due to driver and OneAPI missmatching. +### Random and unreadable output of Gemma-7b-it on Arc770 Ubuntu 22.04 due to driver and oneAPI mismatch If the driver and oneAPI versions mismatch, it will lead to errors when BigDL uses XMX (for short prompts) to speed things up. The output of `What's AI?` may look like below: diff --git a/docs/readthedocs/source/doc/LLM/Overview/FAQ/general_info.md b/docs/readthedocs/source/doc/LLM/Overview/FAQ/general_info.md deleted file mode 100644 index b83e5462fd7..00000000000 --- a/docs/readthedocs/source/doc/LLM/Overview/FAQ/general_info.md +++ /dev/null @@ -1,10 +0,0 @@ -# FAQ: General Info & Concepts - -Refer to this section for general information about BigDL-LLM. - -## BigDL-LLM Support - -### GGUF format usage with BigDL-LLM? 
- -BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations). -Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support. diff --git a/docs/readthedocs/source/doc/LLM/Overview/FAQ/index.rst b/docs/readthedocs/source/doc/LLM/Overview/FAQ/index.rst deleted file mode 100644 index b6fa938427e..00000000000 --- a/docs/readthedocs/source/doc/LLM/Overview/FAQ/index.rst +++ /dev/null @@ -1,7 +0,0 @@ -Frequently Asked Questions (FAQ) -================================ - -You could refer to corresponding page to find solutions of your requirement: - -* `General Info & Concepts <./general_info.html>`_ -* `How to Resolve Errors <./resolve_error.html>`_ diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/index.rst b/docs/readthedocs/source/doc/LLM/Quickstart/index.rst new file mode 100644 index 00000000000..021c101263a --- /dev/null +++ b/docs/readthedocs/source/doc/LLM/Quickstart/index.rst @@ -0,0 +1,11 @@ +BigDL-LLM Quickstart +================================ + +.. note:: + + We are adding more Quickstart guides. 
- + +This section provides concise guides that show you how to: + +* `Install BigDL-LLM on Windows with Intel GPU <./install_windows_gpu.html>`_ +* `Use Text Generation WebUI on Windows with Intel GPU <./webui_quickstart.html>`_ \ No newline at end of file diff --git a/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md b/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md index 43761a7719f..9fddd21faf2 100644 --- a/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md +++ b/docs/readthedocs/source/doc/LLM/Quickstart/install_windows_gpu.md @@ -56,7 +56,7 @@ It applies to Intel Core Ultra and Core 12 - 14 gen integrated GPUs (iGPUs), as ```bash pip install --pre --upgrade bigdl-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/ ``` - > Note: If yuu encounter network issues while installing IPEX, refer to [this guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for troubleshooting advice. + > Note: If you encounter network issues while installing IPEX, refer to [this guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for troubleshooting advice. * You can verify that bigdl-llm is installed successfully by simply importing a few classes from the library. For example, in the Python interactive shell, execute the following import command: ```python
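The FAQ hunk above notes (point 5 of the "PyTorch is not linked with support for xpu devices" entry) that both the optimized model and its input tensors must be moved to the XPU device, or PyTorch raises a device-mismatch error at inference time. A minimal, framework-only sketch of that pattern follows; it deliberately uses a plain `torch.nn.Linear` as a stand-in for a BigDL-LLM-optimized model (so it runs without bigdl-llm installed), and falls back to CPU when no XPU is available:

```python
import torch

# Use the XPU device when intel_extension_for_pytorch provides one,
# otherwise fall back to CPU so this sketch runs anywhere.
device = "xpu" if getattr(torch, "xpu", None) and torch.xpu.is_available() else "cpu"

# Stand-in for the optimized LLM: the model is moved to the device...
model = torch.nn.Linear(4, 2).to(device)

# ...and the input tensors must be moved to the SAME device before inference.
x = torch.ones(1, 4).to(device)

y = model(x)
print(y.shape)  # torch.Size([1, 2])
```

Omitting the `.to(device)` on `x` while the model sits on `"xpu"` reproduces the mismatch the FAQ warns about; the fix is always to send both sides to the same device.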