Update part of Quickstart guide in mddocs (1/2)
* Quickstart index.rst -> index.md
* Update for Linux Install Quickstart
* Update md docs for Windows Install QuickStart
* Small fix
* Add blank lines
* Update mddocs for llama cpp quickstart
* Update mddocs for llama3 llama-cpp and ollama quickstart
* Update mddocs for ollama quickstart
* Update mddocs for openwebui quickstart
* Update mddocs for privateGPT quickstart
* Update mddocs for vllm quickstart
* Small fix
* Update mddocs for text-generation-webui quickstart
* Update for video links
1 parent f0fdfa0, commit 8c9f877

Showing 11 changed files with 607 additions and 824 deletions.
# IPEX-LLM Quickstart

> [!NOTE]
> We are adding more Quickstart guides.

This section includes efficient guides to show you how to:

- [`bigdl-llm` Migration Guide](./bigdl_llm_migration.md)
- [Install IPEX-LLM on Linux with Intel GPU](./install_linux_gpu.md)
- [Install IPEX-LLM on Windows with Intel GPU](./install_windows_gpu.md)
- [Install IPEX-LLM in Docker on Windows with Intel GPU](./docker_windows_gpu.md)
- [Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL)](./docker_benchmark_quickstart.md)
- [Run Performance Benchmarking with IPEX-LLM](./benchmark_quickstart.md)
- [Run Local RAG using Langchain-Chatchat on Intel GPU](./chatchat_quickstart.md)
- [Run Text Generation WebUI on Intel GPU](./webui_quickstart.md)
- [Run Open WebUI on Intel GPU](./open_webui_with_ollama_quickstart.md)
- [Run PrivateGPT with IPEX-LLM on Intel GPU](./privateGPT_quickstart.md)
- [Run Coding Copilot (Continue) in VSCode with Intel GPU](./continue_quickstart.md)
- [Run Dify on Intel GPU](./dify_quickstart.md)
- [Run llama.cpp with IPEX-LLM on Intel GPU](./llama_cpp_quickstart.md)
- [Run Ollama with IPEX-LLM on Intel GPU](./ollama_quickstart.md)
- [Run Llama 3 on Intel GPU using llama.cpp and ollama with IPEX-LLM](./llama3_llamacpp_ollama_quickstart.md)
- [Run IPEX-LLM Serving with FastChat](./fastchat_quickstart.md)
- [Run IPEX-LLM Serving with vLLM on Intel GPU](./vLLM_quickstart.md)
- [Finetune LLM with Axolotl on Intel GPU](./axolotl_quickstart.md)
- [Run IPEX-LLM serving on Multiple Intel GPUs using DeepSpeed AutoTP and FastApi](./deepspeed_autotp_fastapi_quickstart.md)