[LLM Doc] Restructure (intel-analytics#10322)
* Add quick link guide to sidebar

* Add QuickStart to TOC

* Update quick links in main page

* Hide some sections under More in the top nav bar

* Restructure FAQ sections

* Small fix
Oscilloscope98 authored Mar 5, 2024
1 parent af1d6d3 commit 549d997
Showing 8 changed files with 45 additions and 120 deletions.
99 changes: 14 additions & 85 deletions docs/readthedocs/source/_templates/sidebar_quicklinks.html
@@ -3,111 +3,40 @@
<div class="navbar-nav">
<ul class="nav">
<li>
<strong class="bigdl-quicklinks-section-title">LLM QuickStart</strong>
<input id="quicklink-cluster-llm" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm" class="toctree-toggle">
<strong class="bigdl-quicklinks-section-title">BigDL-LLM Quickstart</strong>
<input id="quicklink-cluster-llm-quickstart" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-quickstart" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/LLM/Overview/llm.html">BigDL-LLM in 5 minutes</a>
<a href="doc/LLM/Quickstart/install_windows_gpu.html">Install BigDL-LLM on Windows with Intel GPU</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">Orca QuickStart</strong>
<input id="quicklink-cluster-orca" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-orca" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="bigdl-quicklinks-section-nav">
<li>
<a href="doc/Orca/Howto/tf2keras-quickstart.html">Scale TensorFlow 2 Applications</a>
</li>
<li>
<a href="doc/Orca/Howto/pytorch-quickstart.html">Scale PyTorch Applications</a>
</li>
<li>
<a href="doc/Orca/Howto/ray-quickstart.html">Run Ray programs on Big Data clusters</a>
<a href="doc/LLM/Quickstart/webui_quickstart.html">Use Text Generation WebUI on Windows with Intel GPU</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">Nano QuickStart</strong>
<input id="quicklink-cluster-nano" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-nano" class="toctree-toggle">
<strong class="bigdl-quicklinks-section-title">BigDL-LLM Installation</strong>
<input id="quicklink-cluster-llm-installation" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-llm-installation" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/Nano/QuickStart/pytorch_train_quickstart.html">PyTorch Training Acceleration</a>
</li>
<li>
<a href="doc/Nano/QuickStart/pytorch_quantization_inc_onnx.html">PyTorch Inference Quantization
with ONNXRuntime Acceleration </a>
</li>
<li>
<a href="doc/Nano/QuickStart/pytorch_openvino.html">PyTorch Inference Acceleration using
OpenVINO</a>
</li>
<li>
<a href="doc/Nano/QuickStart/tensorflow_train_quickstart.html">Tensorflow Training
Acceleration</a>
</li>
<li>
<a href="doc/Nano/QuickStart/tensorflow_quantization_quickstart.html">Tensorflow Quantization
Acceleration</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">DLlib QuickStart</strong>
<input id="quicklink-cluster-dllib" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-dllib" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/DLlib/QuickStart/python-getting-started.html">Python QuickStart</a>
</li>
<li>
<a href="doc/DLlib/QuickStart/scala-getting-started.html">Scala QuickStart</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">Chronos QuickStart</strong>
<input id="quicklink-cluster-chronos" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-chronos" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<ul class="bigdl-quicklinks-section-nav">
<li>
<a href="doc/Chronos/QuickStart/chronos-tsdataset-forecaster-quickstart.html">Basic
Forecasting</a>
<a href="doc/LLM/Overview/install_cpu.html">CPU</a>
</li>
<li>
<a href="doc/Chronos/QuickStart/chronos-autotsest-quickstart.html">Forecasting using AutoML</a>
<a href="doc/LLM/Overview/install_gpu.html">GPU</a>
</li>
<li>
<a href="doc/Chronos/QuickStart/chronos-anomaly-detector.html">Anomaly Detection</a>
</li>
</ul>
</li>
<li>
<strong class="bigdl-quicklinks-section-title">PPML QuickStart</strong>
<input id="quicklink-cluster-ppml" type="checkbox" class="toctree-checkbox" />
<label for="quicklink-cluster-ppml" class="toctree-toggle">
<i class="fa-solid fa-chevron-down"></i>
</label>
<ul class="nav bigdl-quicklinks-section-nav">
<li>
<a href="doc/PPML/Overview/quicktour.html">Hello World Example</a>
</li>
<li>
<a href="doc/PPML/QuickStart/end-to-end.html">End-to-End Example</a>
</li>
</ul>
<a href="doc/LLM/Overview/FAQ/faq.html">
<strong class="bigdl-quicklinks-section-title">BigDL-LLM FAQ</strong>
</a>
</li>
</ul>
</div>
14 changes: 7 additions & 7 deletions docs/readthedocs/source/_toc.yml
@@ -34,6 +34,12 @@ subtrees:
title: "CPU"
- file: doc/LLM/Overview/install_gpu
title: "GPU"
- file: doc/LLM/Quickstart/index
title: "Quickstart"
subtrees:
- entries:
- file: doc/LLM/Quickstart/install_windows_gpu
- file: doc/LLM/Quickstart/webui_quickstart
- file: doc/LLM/Overview/KeyFeatures/index
title: "Key Features"
subtrees:
@@ -64,14 +70,8 @@ subtrees:
# title: "Tips and Known Issues"
- file: doc/PythonAPI/LLM/index
title: "API Reference"
- file: doc/LLM/Overview/FAQ/index
- file: doc/LLM/Overview/FAQ/faq
title: "FAQ"
subtrees:
- entries:
- file: doc/LLM/Overview/FAQ/general_info
title: "General Info & Concepts"
- file: doc/LLM/Overview/FAQ/resolve_error
title: "How to Resolve Errors"

- entries:
- file: doc/Orca/index
2 changes: 1 addition & 1 deletion docs/readthedocs/source/conf.py
@@ -37,7 +37,7 @@
# -- Project information -----------------------------------------------------
html_theme = "pydata_sphinx_theme"
html_theme_options = {
"header_links_before_dropdown": 9,
"header_links_before_dropdown": 3,
"icon_links": [
{
"name": "GitHub Repository for BigDL",
@@ -1,18 +1,20 @@
# FAQ: How to Resolve Errors
# Frequently Asked Questions (FAQ)

Refer to this section for common issues faced while using BigDL-LLM.
## General Info & Concepts

## Installation Error
### How to use GGUF models with BigDL-LLM?

BigDL-LLM supports running GGUF/AWQ/GPTQ models on both [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations) and [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations).
Please also refer to [here](https://github.com/intel-analytics/BigDL?tab=readme-ov-file#latest-update-) for our latest support.

## How to Resolve Errors

### Fail to install `bigdl-llm` through `pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu`

You could try to install BigDL-LLM dependencies for Intel XPU from source archives:
- For Windows system, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for the steps.
- For Linux system, refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#id3) for the steps.


## Runtime Error

### PyTorch is not linked with support for xpu devices

1. Before running on Intel GPUs, please make sure you've prepared your environment following the [installation instructions](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html).
@@ -21,7 +23,7 @@ You could try to install BigDL-LLM dependencies for Intel XPU from source archiv
4. If you have multiple GPUs, you could refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/KeyFeatures/multi_gpus_selection.html) for details about GPU selection.
5. If you run inference with the optimized model on Intel GPUs, you also need to call `to('xpu')` on the input tensors.
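The last two points of the checklist above boil down to keeping the model and its input tensors on the same device. Below is a minimal, generic PyTorch sketch (not BigDL-LLM-specific; the `xpu` branch assumes a PyTorch build with Intel GPU support, and falls back to CPU otherwise):

```python
import torch

# Use the Intel GPU ("xpu") when the running PyTorch build exposes one,
# otherwise fall back to CPU so the sketch still runs.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

model = torch.nn.Linear(16, 4).to(device)   # move the model to the device
inputs = torch.randn(2, 16).to(device)      # inputs must live on the same device

with torch.no_grad():
    out = model(inputs)

# Parameters and inputs now share one device; a mismatch here is what
# typically triggers device-related runtime errors during inference.
assert next(model.parameters()).device == inputs.device
```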

### import `intel_extension_for_pytorch` error on Windows GPU
### Import `intel_extension_for_pytorch` error on Windows GPU

Please refer to [here](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#error-loading-intel-extension-for-pytorch) for detailed guide. We list the possible missing requirements in environment which could lead to this error.

@@ -50,15 +52,15 @@ This error is caused by out of GPU memory. Some possible solutions to decrease G
2. You could try `model = model.half()` or `model = model.bfloat16()` before moving the model to GPU to use less GPU memory.
3. You could try setting `cpu_embedding=True` when calling the `from_pretrained` method of an AutoClass or the `optimize_model` function.
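As a rough illustration of why option 2 reduces memory, half-precision parameters occupy half the bytes of float32. This is a generic PyTorch sketch, not tied to any BigDL-LLM API:

```python
import torch

w32 = torch.zeros(1024, 1024)  # float32 by default: 4 bytes per element
w16 = w32.half()               # float16 copy: 2 bytes per element

bytes32 = w32.element_size() * w32.nelement()
bytes16 = w16.element_size() * w16.nelement()

# The float16 copy needs exactly half the memory of the float32 original.
assert bytes16 * 2 == bytes32
```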

### failed to enable AMX
### Failed to enable AMX

You could use `export BIGDL_LLM_AMX_DISABLED=1` to disable AMX manually and solve this error.
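The same flag can also be set from within Python before BigDL-LLM is imported — a minimal stdlib sketch (it only sets the variable; BigDL-LLM is assumed to read it at run time):

```python
import os

# In-process equivalent of `export BIGDL_LLM_AMX_DISABLED=1`.
# It must be set before the library that reads it is imported.
os.environ["BIGDL_LLM_AMX_DISABLED"] = "1"

amx_disabled = os.environ.get("BIGDL_LLM_AMX_DISABLED") == "1"
```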

### oneCCL: comm_selector.cpp:57 create_comm_impl: EXCEPTION: ze_data was not initialized

You may encounter this error during finetuning on multi GPUs. Please try `sudo apt install level-zero-dev` to fix it.

### random and unreadable output of Gemma-7b-it on Arc770 ubuntu 22.04 due to driver and OneAPI missmatching.
### Random and unreadable output of Gemma-7b-it on Arc770 Ubuntu 22.04 due to driver and oneAPI mismatch

If the driver and oneAPI versions mismatch, it can lead to errors when BigDL-LLM uses XMX (for short prompts) to speed up computation.
The output of `What's AI?` may look like the following:
10 changes: 0 additions & 10 deletions docs/readthedocs/source/doc/LLM/Overview/FAQ/general_info.md

This file was deleted.

7 changes: 0 additions & 7 deletions docs/readthedocs/source/doc/LLM/Overview/FAQ/index.rst

This file was deleted.

11 changes: 11 additions & 0 deletions docs/readthedocs/source/doc/LLM/Quickstart/index.rst
@@ -0,0 +1,11 @@
BigDL-LLM Quickstart
================================

.. note::

We are adding more Quickstart guides.

This section includes efficient guides showing you how to:

* `Install BigDL-LLM on Windows with Intel GPU <./install_windows_gpu.html>`_
* `Use Text Generation WebUI on Windows with Intel GPU <./webui_quickstart.html>`_
@@ -56,7 +56,7 @@ It applies to Intel Core Ultra and Core 12 - 14 gen integrated GPUs (iGPUs), as
```bash
pip install --pre --upgrade bigdl-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
```
> Note: If yuu encounter network issues while installing IPEX, refer to [this guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for troubleshooting advice.
> Note: If you encounter network issues while installing IPEX, refer to [this guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html#install-bigdl-llm-from-wheel) for troubleshooting advice.

* You can verify that bigdl-llm is installed successfully by simply importing a few classes from the library. For example, in the Python interactive shell, execute the following import command:
```python
