diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index 186c11ebcec..a6e4f1e956e 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -76,36 +76,54 @@ Latest update 🔥
 ``ipex-llm`` Demos
 ************************************************
 
-See the **optimized performance** of ``chatglm2-6b`` and ``llama-2-13b-chat`` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
+See demos of running local LLMs *on Intel Iris iGPU, Intel Core Ultra iGPU, single-card Arc GPU, or multi-card Arc GPUs* using ``ipex-llm`` below.
 
 .. raw:: html
 
-  <table width="100%">
-    <tr>
-      <td align="center" colspan="2">12th Gen Intel Core CPU</td>
-      <td align="center" colspan="2">Intel Arc GPU</td>
-    </tr>
-    <tr>
-      <td align="center">chatglm2-6b</td>
-      <td align="center">llama-2-13b-chat</td>
-      <td align="center">chatglm2-6b</td>
-      <td align="center">llama-2-13b-chat</td>
-    </tr>
-  </table>
+  <table width="100%">
+    <tr>
+      <td align="center">Intel Iris iGPU</td>
+      <td align="center">Intel Core Ultra iGPU</td>
+      <td align="center">Intel Arc dGPU</td>
+      <td align="center">2-Card Intel Arc dGPUs</td>
+    </tr>
+    <tr>
+      <td align="center">llama.cpp (Phi-3-mini Q4_0)</td>
+      <td align="center">Ollama (Mistral-7B Q4_K)</td>
+      <td align="center">TextGeneration-WebUI (Llama3-8B FP8)</td>
+      <td align="center">FastChat (QWen1.5-32B FP6)</td>
+    </tr>
+  </table>
 
 ************************************************
 ``ipex-llm`` Quickstart