diff --git a/README.md b/README.md
index f09f1627922..4cdbb2b20c0 100644
--- a/README.md
+++ b/README.md
@@ -162,6 +162,7 @@ Please see the **Perplexity** result below (tested on Wikitext dataset using the
 ### Docker
 - [GPU Inference in C++](docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md): running `llama.cpp`, `ollama`, `OpenWebUI`, etc., with `ipex-llm` on Intel GPU
 - [GPU Inference in Python](docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md) : running HuggingFace `transformers`, `LangChain`, `LlamaIndex`, `ModelScope`, etc. with `ipex-llm` on Intel GPU
+- [VSCode Guide on GPU](docs/readthedocs/source/doc/LLM/DockerGuides/docker_run_pytorch_inference_in_vscode.md): running and developing Python LLM applications using VSCode on Intel GPU
 - [vLLM on GPU](docs/mddocs/DockerGuides/vllm_docker_quickstart.md): running `vLLM` serving with `ipex-llm` on Intel GPU
 - [FastChat on GPU](docs/mddocs/DockerGuides/fastchat_docker_quickstart.md): running `FastChat` serving with `ipex-llm` on Intel GPU
@@ -219,6 +220,13 @@ Please see the **Perplexity** result below (tested on Wikitext dataset using the
 - [Tutorials](https://github.com/intel-analytics/ipex-llm-tutorial)

+## API Doc
+
+- [HuggingFace Transformers-style API (Auto Classes)](docs/mddocs/PythonAPI/transformers.md)
+- [API for arbitrary PyTorch Model](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/PythonAPI/optimize.md)
+
+## FAQ
+- [FAQ & Troubleshooting](docs/mddocs/Overview/FAQ/faq.md)

 ## Verified Models
 Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaMA2, Mistral, Mixtral, Gemma, LLaVA, Whisper, ChatGLM2/ChatGLM3, Baichuan/Baichuan2, Qwen/Qwen-1.5, InternLM* and more; see the list below.
diff --git a/docs/mddocs/Overview/FAQ/faq.md b/docs/mddocs/Overview/FAQ/faq.md
index 284cb841b87..ab8f0df3385 100644
--- a/docs/mddocs/Overview/FAQ/faq.md
+++ b/docs/mddocs/Overview/FAQ/faq.md
@@ -10,8 +10,19 @@ Please also refer to [here](https://github.com/intel-analytics/ipex-llm?tab=read

 ## How to Resolve Errors

-### Fail to install `ipex-llm` through `pip install --pre --upgrade ipex-llm[xpu] --extra-index-urlhttps://pytorch-extension.intel.com/release-whl/stable/xpu/us/` or `pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/`
-You could try to install IPEX-LLM dependencies for Intel XPU from source archives:
+### Fail to install `ipex-llm` via `pip` on Intel GPU
+
+If you encounter errors when installing `ipex-llm` on Intel GPU using either
+
+```bash
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+```
+or
+```bash
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+```
+
+you can try installing the `ipex-llm` dependencies from source archives instead:
 - For Windows system, refer to [here](../install_gpu.md#install-ipex-llm-from-wheel) for the steps.
 - For Linux system, refer to [here](../install_gpu.md#prerequisites-1) for the steps.
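Once `ipex-llm[xpu]` is installed through either route above, a quick way to confirm the install worked is to check that PyTorch can see the Intel GPU. A minimal sketch, assuming `ipex-llm[xpu]` and its bundled `intel_extension_for_pytorch` are on the path:

```python
# Sanity check after installing ipex-llm[xpu]: confirm the XPU device is visible.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 (the import registers the 'xpu' device)

print(torch.xpu.is_available())          # True once the Intel GPU driver/runtime is set up
if torch.xpu.is_available():
    print(torch.xpu.get_device_name(0))  # name of the first visible Intel GPU
```

If this prints `False`, the problem usually lies in the GPU driver or oneAPI runtime setup rather than in `pip` itself, which is worth ruling out before installing from source archives.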
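For the Transformers-style API that the README section above now links to, usage follows the familiar HuggingFace pattern. A minimal sketch on Intel GPU, where the model ID and prompt are hypothetical placeholders rather than values from the docs:

```python
# Sketch of the HuggingFace Transformers-style API (Auto Classes) on Intel GPU.
# The model ID and prompt below are placeholders.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical model ID
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True)  # INT4 weight quantization
model = model.to("xpu")  # place the optimized model on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```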