diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index 186c11ebcec..a6e4f1e956e 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -76,36 +76,54 @@ Latest update 🔥
 
 ``ipex-llm`` Demos
 ************************************************
-See the **optimized performance** of ``chatglm2-6b`` and ``llama-2-13b-chat`` models on 12th Gen Intel Core CPU and Intel Arc GPU below.
+See demos of running local LLMs *on Intel Iris iGPU, Intel Core Ultra iGPU, single-card Arc GPU, or multi-card Arc GPUs* using ``ipex-llm`` below.
 
 .. raw:: html
 
-  <table>
-    <tr>
-      <td colspan="2">12th Gen Intel Core CPU</td>
-      <td colspan="2">Intel Arc GPU</td>
-    </tr>
-    <tr>
-      <td><!-- demo video embed (elided in source) --></td>
-      <td><!-- demo video embed (elided in source) --></td>
-      <td><!-- demo video embed (elided in source) --></td>
-      <td><!-- demo video embed (elided in source) --></td>
-    </tr>
-    <tr>
-      <td>chatglm2-6b</td>
-      <td>llama-2-13b-chat</td>
-      <td>chatglm2-6b</td>
-      <td>llama-2-13b-chat</td>
-    </tr>
-  </table>
+  <table>
+    <tr>
+      <td>Intel Iris iGPU</td>
+      <td>Intel Core Ultra iGPU</td>
+      <td>Intel Arc dGPU</td>
+      <td>2-Card Intel Arc dGPUs</td>
+    </tr>
+    <tr>
+      <td><!-- demo video embed (elided in source) --></td>
+      <td><!-- demo video embed (elided in source) --></td>
+      <td><!-- demo video embed (elided in source) --></td>
+      <td><!-- demo video embed (elided in source) --></td>
+    </tr>
+    <tr>
+      <td>llama.cpp(Phi-3-mini Q4_0)</td>
+      <td>Ollama(Mistral-7B Q4_K)</td>
+      <td>TextGeneration-WebUI(Llama3-8B FP8)</td>
+      <td>FastChat(QWen1.5-32B FP6)</td>
+    </tr>
+  </table>