From d412ad7069f439d94769950c32d5b456add8e096 Mon Sep 17 00:00:00 2001
From: ivy-lv11
Date: Tue, 4 Jun 2024 14:23:33 +0800
Subject: [PATCH] modify

---
 docs/docs/integrations/llms/ipex_llm_gpu.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/integrations/llms/ipex_llm_gpu.ipynb b/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
index 228811ab28067..25b34c1aa156a 100644
--- a/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
+++ b/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
@@ -125,7 +125,7 @@
     "\n",
     "## Basic Usage\n",
     "\n",
-    "Setting `device` to `\"xpu\"` in `model_kwargs` when initializing `IpexLLM` will put the LLM model on Intel GPU and benefit from IPEX-LLM optimizations:"
+    "Setting `device` to `\"xpu\"` in `model_kwargs` when initializing `IpexLLM` will put the LLM model on Intel GPU and benefit from IPEX-LLM optimizations. Specify the prompt template for your model. In this example, we use the [vicuna-1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) model. If you're working with a different model, choose a proper template accordingly."
    ]
   },
   {
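For context, the notebook cell this hunk documents would look roughly like the sketch below. It is an illustration, not the verbatim cell from the notebook: it assumes `langchain-community` and `ipex-llm` are installed, an Intel GPU (`xpu` device) is available, and the `"USER: ... ASSISTANT:"` string is the vicuna-1.5 conversation format the new text refers to.

```python
from langchain_community.llms import IpexLLM
from langchain_core.prompts import PromptTemplate

# Prompt template in the vicuna-1.5 conversation format; a different
# model family (e.g. Llama 2) would need its own template.
template = "USER: {question}\nASSISTANT:"
prompt = PromptTemplate(template=template, input_variables=["question"])

# Setting "device": "xpu" in model_kwargs places the model on the Intel
# GPU, where it benefits from IPEX-LLM optimizations.
llm = IpexLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",
    model_kwargs={
        "temperature": 0,
        "max_length": 64,
        "trust_remote_code": True,
        "device": "xpu",
    },
)

# Chain the prompt into the model and run a single generation.
chain = prompt | llm
print(chain.invoke({"question": "What is AI?"}))
```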