diff --git a/README.md b/README.md
index 6a70f2d0c59..61eef9117e0 100644
--- a/README.md
+++ b/README.md
@@ -177,6 +177,7 @@ Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaM
| DeepSeek-MoE | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe) | |
| Ziya-Coding-34B-v1.0 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya) | |
| Phi-2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-2) |
+| Phi-3 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3) |
| Yuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yuan2) |
| Gemma | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/gemma) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/gemma) |
| DeciLM-7B | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deciLM-7b) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deciLM-7b) |
diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index 383dbf918f4..5a307f3f8b1 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -538,6 +538,13 @@ Verified Models
Yuan2 |
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md
new file mode 100644
index 00000000000..8794d02ce74
--- /dev/null
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md
@@ -0,0 +1,71 @@
+# Phi-3
+
+In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to Phi-3 models. For illustration purposes, we use [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as the reference Phi-3 model.
+
+> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
+>
+> IPEX-LLM optimizes the *Transformers* model in INT4 precision at runtime, and thus no explicit conversion is needed.
+
+## Requirements
+To run these examples with IPEX-LLM, your machine should meet some recommended requirements; please refer to [here](../README.md#recommended-requirements) for more information.
+
+## Example: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a Phi-3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
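+
+Under the hood, the example boils down to a few calls; the following is a condensed sketch of [generate.py](./generate.py) (see the full script for argument parsing and timing):
+
+```python
+from ipex_llm.transformers import AutoModelForCausalLM
+from transformers import AutoTokenizer
+
+# `load_in_4bit=True` converts the model's linear layers to INT4 at load time
+model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                             load_in_4bit=True,
+                                             trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                          trust_remote_code=True)
+
+prompt = "<|user|>\nWhat is AI?<|end|>\n<|assistant|>"
+input_ids = tokenizer.encode(prompt, return_tensors="pt")
+output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0]))
+```
+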
+### 1. Install
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+
+After installing conda, create a Python environment for IPEX-LLM:
+```bash
+conda create -n llm python=3.11 # Python 3.11 is recommended
+conda activate llm
+
+pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+
+pip install transformers==4.37.0
+```
+
+### 2. Run
+After setting up the Python environment, you can run the example by following the steps below.
+
+> **Note**: When loading the model in 4-bit, IPEX-LLM converts the linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference after conversion. For example, the 3.8B Phi-3-mini takes roughly 7.6 GB to load in 16-bit and about 1.9 GB once converted to INT4.
+>
+> Please select a Phi-3 model size appropriate for the capabilities of your machine.
+
+#### 2.1 Client
+On client Windows machines, it is recommended to run directly with full utilization of all cores:
+```powershell
+python ./generate.py --prompt 'What is AI?'
+```
+More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
+
+#### 2.2 Server
+For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and to run the example with all the physical cores of a single socket.
+
+E.g. on Linux,
+```bash
+# set IPEX-LLM env variables
+source ipex-llm-init
+
+# e.g. for a server with 48 cores per socket
+export OMP_NUM_THREADS=48
+numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
+```
+More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
+
+#### 2.3 Arguments Info
+In the example, several arguments can be passed to satisfy your requirements:
+
+- `--repo-id-or-model-path`: str, argument defining the Hugging Face repo id for the Phi-3 model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-mini-4k-instruct'`.
+- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
+- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.
+
+#### 2.4 Sample Output
+#### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
+```log
+-------------------- Prompt --------------------
+<|user|>
+What is AI?<|end|>
+<|assistant|>
+-------------------- Output --------------------
+<|user|> What is AI?<|end|><|assistant|> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The goal
+```
\ No newline at end of file
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/generate.py b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/generate.py
new file mode 100644
index 00000000000..4dfec157b88
--- /dev/null
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/generate.py
@@ -0,0 +1,68 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import torch
+import time
+import argparse
+
+from ipex_llm.transformers import AutoModelForCausalLM
+from transformers import AutoTokenizer
+
+# You could tune the prompt based on your own model;
+# the prompt format below follows https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#chat-format
+PHI3_PROMPT_FORMAT = "<|user|>\n{prompt}<|end|>\n<|assistant|>"
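+# e.g. the default prompt "What is AI?" renders as:
+#   <|user|>
+#   What is AI?<|end|>
+#   <|assistant|>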
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3 model')
+ parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-mini-4k-instruct",
+ help='The huggingface repo id for the phi-3 model to be downloaded'
+ ', or the path to the huggingface checkpoint folder')
+ parser.add_argument('--prompt', type=str, default="What is AI?",
+ help='Prompt to infer')
+ parser.add_argument('--n-predict', type=int, default=32,
+ help='Max tokens to predict')
+
+ args = parser.parse_args()
+ model_path = args.repo_id_or_model_path
+
+    # Load the model in 4-bit,
+    # which converts the relevant layers in the model into INT4 format
+ model = AutoModelForCausalLM.from_pretrained(model_path,
+ load_in_4bit=True,
+ optimize_model=True,
+ trust_remote_code=True,
+ use_cache=True)
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+ trust_remote_code=True)
+
+ # Generate predicted tokens
+ with torch.inference_mode():
+ prompt = PHI3_PROMPT_FORMAT.format(prompt=args.prompt)
+ input_ids = tokenizer.encode(prompt, return_tensors="pt")
+ st = time.time()
+
+ output = model.generate(input_ids,
+ do_sample=False,
+ max_new_tokens=args.n_predict)
+ end = time.time()
+ output_str = tokenizer.decode(output[0], skip_special_tokens=False)
+ print(f'Inference time: {end-st} s')
+ print('-'*20, 'Prompt', '-'*20)
+ print(prompt)
+ print('-'*20, 'Output', '-'*20)
+ print(output_str)
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md b/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md
new file mode 100644
index 00000000000..f9bb937f581
--- /dev/null
+++ b/python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md
@@ -0,0 +1,67 @@
+# Phi-3
+
+In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to Phi-3 models. For illustration purposes, we use [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as the reference Phi-3 model.
+
+> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
+>
+> IPEX-LLM optimizes the *Transformers* model in INT4 precision at runtime, and thus no explicit conversion is needed.
+
+## Requirements
+To run these examples with IPEX-LLM, your machine should meet some recommended requirements; please refer to [here](../README.md#recommended-requirements) for more information.
+
+## Example: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a Phi-3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
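+
+The core of the example is the one-line `optimize_model` call; the following is a condensed sketch of [generate.py](./generate.py) (see the full script for argument parsing and timing):
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from ipex_llm import optimize_model
+
+# Load with the stock Hugging Face API, then optimize with one line
+model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                             torch_dtype='auto',
+                                             trust_remote_code=True)
+model = optimize_model(model)  # INT4 optimization by default
+
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                          trust_remote_code=True)
+
+prompt = "<|user|>\nWhat is AI?<|end|>\n<|assistant|>"
+input_ids = tokenizer.encode(prompt, return_tensors="pt")
+output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0]))
+```
+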
+### 1. Install
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+
+After installing conda, create a Python environment for IPEX-LLM:
+```bash
+conda create -n llm python=3.11 # Python 3.11 is recommended
+conda activate llm
+
+pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+
+pip install transformers==4.37.0
+```
+
+### 2. Run
+After setting up the Python environment, you can run the example by following the steps below.
+
+#### 2.1 Client
+On client Windows machines, it is recommended to run directly with full utilization of all cores:
+```powershell
+python ./generate.py --prompt 'What is AI?'
+```
+More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
+
+#### 2.2 Server
+For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and to run the example with all the physical cores of a single socket.
+
+E.g. on Linux,
+```bash
+# set IPEX-LLM env variables
+source ipex-llm-init
+
+# e.g. for a server with 48 cores per socket
+export OMP_NUM_THREADS=48
+numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
+```
+More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
+
+#### 2.3 Arguments Info
+In the example, several arguments can be passed to satisfy your requirements:
+
+- `--repo-id-or-model-path`: str, argument defining the Hugging Face repo id for the Phi-3 model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-mini-4k-instruct'`.
+- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
+- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.
+
+#### 2.4 Sample Output
+#### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
+```log
+-------------------- Prompt --------------------
+<|user|>
+What is AI?<|end|>
+<|assistant|>
+-------------------- Output --------------------
+<|user|> What is AI?<|end|><|assistant|> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The goal
+```
diff --git a/python/llm/example/CPU/PyTorch-Models/Model/phi-3/generate.py b/python/llm/example/CPU/PyTorch-Models/Model/phi-3/generate.py
new file mode 100644
index 00000000000..f957d8a6a5e
--- /dev/null
+++ b/python/llm/example/CPU/PyTorch-Models/Model/phi-3/generate.py
@@ -0,0 +1,70 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import torch
+import time
+import argparse
+
+from transformers import AutoTokenizer, AutoModelForCausalLM
+from ipex_llm import optimize_model
+
+# You could tune the prompt based on your own model;
+# the prompt format below follows https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#chat-format
+PHI3_PROMPT_FORMAT = "<|user|>\n{prompt}<|end|>\n<|assistant|>"
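+# e.g. the default prompt "What is AI?" renders as:
+#   <|user|>
+#   What is AI?<|end|>
+#   <|assistant|>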
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3 model')
+ parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-mini-4k-instruct",
+ help='The huggingface repo id for the phi-3 model to be downloaded'
+ ', or the path to the huggingface checkpoint folder')
+ parser.add_argument('--prompt', type=str, default="What is AI?",
+ help='Prompt to infer')
+ parser.add_argument('--n-predict', type=int, default=32,
+ help='Max tokens to predict')
+
+ args = parser.parse_args()
+ model_path = args.repo_id_or_model_path
+
+ # Load model
+ model = AutoModelForCausalLM.from_pretrained(model_path,
+ trust_remote_code=True,
+ torch_dtype='auto',
+ low_cpu_mem_usage=True,
+ use_cache=True)
+
+    # Enable IPEX-LLM optimization on the model with a single line
+ model = optimize_model(model)
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+ trust_remote_code=True)
+
+ # Generate predicted tokens
+ with torch.inference_mode():
+ prompt = PHI3_PROMPT_FORMAT.format(prompt=args.prompt)
+ input_ids = tokenizer.encode(prompt, return_tensors="pt")
+ st = time.time()
+
+ output = model.generate(input_ids,
+ do_sample=False,
+ max_new_tokens=args.n_predict)
+ end = time.time()
+ output_str = tokenizer.decode(output[0], skip_special_tokens=False)
+ print(f'Inference time: {end-st} s')
+ print('-'*20, 'Prompt', '-'*20)
+ print(prompt)
+ print('-'*20, 'Output', '-'*20)
+ print(output_str)
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3/README.md
new file mode 100644
index 00000000000..a05ab6d2c37
--- /dev/null
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3/README.md
@@ -0,0 +1,131 @@
+# Phi-3
+In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to Phi-3 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as the reference Phi-3 model.
+
+## 0. Requirements
+To run these examples with IPEX-LLM on Intel GPUs, your machine should meet some recommended requirements; please refer to [here](../../../README.md#requirements) for more information.
+
+## Example: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a Phi-3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
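+
+Under the hood, the example boils down to a few calls; the following is a condensed sketch of [generate.py](./generate.py) (see the full script for the warmup run and timing):
+
+```python
+from ipex_llm.transformers import AutoModelForCausalLM
+from transformers import AutoTokenizer
+
+# `load_in_4bit=True` converts the linear layers to INT4 at load time;
+# the optimized model and the inputs are then moved to the Intel GPU ('xpu')
+model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                             load_in_4bit=True,
+                                             trust_remote_code=True).to('xpu')
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                          trust_remote_code=True)
+
+prompt = "<|user|>\nWhat is AI?<|end|>\n<|assistant|>"
+input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
+output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0]))
+```
+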
+### 1. Install
+#### 1.1 Installation on Linux
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.11
+conda activate llm
+# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.37.0
+```
+
+#### 1.2 Installation on Windows
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.11 libuv
+conda activate llm
+# the command below uses pip to install the Intel oneAPI Base Toolkit 2024.0
+pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0
+
+# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.37.0
+```
+
+### 2. Configure OneAPI environment variables for Linux
+
+> [!NOTE]
+> Skip this step if you are running on Windows.
+
+This step is required on Linux when oneAPI was installed via APT or the offline installer; skip it if oneAPI was installed via pip.
+
+```bash
+source /opt/intel/oneapi/setvars.sh
+```
+
+### 3. Runtime Configurations
+For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
+#### 3.1 Configurations for Linux
+
+**For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series**
+
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+```
+
+**For Intel Data Center GPU Max Series**
+
+```bash
+export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+export ENABLE_SDP_FUSION=1
+```
+> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
+
+**For Intel iGPU**
+
+```bash
+export SYCL_CACHE_PERSISTENT=1
+export BIGDL_LLM_XMX_DISABLED=1
+```
+
+#### 3.2 Configurations for Windows
+
+**For Intel iGPU**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+set BIGDL_LLM_XMX_DISABLED=1
+```
+
+**For Intel Arc™ A-Series Graphics**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+```
+
+> [!NOTE]
+> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60 GPU, it may take several minutes to compile.
+### 4. Running examples
+
+```bash
+python ./generate.py --prompt 'What is AI?'
+```
+
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Phi-3 model (e.g. `microsoft/Phi-3-mini-4k-instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-mini-4k-instruct'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
+
+#### Sample Output
+#### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
+
+```log
+Inference time: xxxx s
+-------------------- Prompt --------------------
+<|user|>
+What is AI?<|end|>
+<|assistant|>
+-------------------- Output --------------------
+<|user|> What is AI?<|end|><|assistant|> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The goal
+```
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3/generate.py b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3/generate.py
new file mode 100644
index 00000000000..1efba61b943
--- /dev/null
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3/generate.py
@@ -0,0 +1,78 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import torch
+import time
+import argparse
+
+from ipex_llm.transformers import AutoModelForCausalLM
+from transformers import AutoTokenizer
+
+# You could tune the prompt based on your own model;
+# the prompt format below follows https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#chat-format
+PHI3_PROMPT_FORMAT = "<|user|>\n{prompt}<|end|>\n<|assistant|>"
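+# e.g. the default prompt "What is AI?" renders as:
+#   <|user|>
+#   What is AI?<|end|>
+#   <|assistant|>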
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3 model')
+ parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-mini-4k-instruct",
+ help='The huggingface repo id for the phi-3 model to be downloaded'
+ ', or the path to the huggingface checkpoint folder')
+ parser.add_argument('--prompt', type=str, default="What is AI?",
+ help='Prompt to infer')
+ parser.add_argument('--n-predict', type=int, default=32,
+ help='Max tokens to predict')
+
+ args = parser.parse_args()
+ model_path = args.repo_id_or_model_path
+
+    # Load the model in 4-bit,
+    # which converts the relevant layers in the model into INT4 format.
+    # For Windows users running LLMs on Intel iGPUs, we recommend setting `cpu_embedding=True`
+    # in the from_pretrained function, so that the memory-intensive embedding layer
+    # runs on the CPU instead of the iGPU.
+ model = AutoModelForCausalLM.from_pretrained(model_path,
+ load_in_4bit=True,
+ trust_remote_code=True,
+ optimize_model=True,
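+                                                 # cpu_embedding=True,  # uncomment for Windows iGPU (see comment above)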
+ use_cache=True)
+
+ model = model.to('xpu')
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+ trust_remote_code=True)
+
+ # Generate predicted tokens
+ with torch.inference_mode():
+ prompt = PHI3_PROMPT_FORMAT.format(prompt=args.prompt)
+ input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
+
+        # The ipex_llm-optimized model needs a warmup run; only then is the inference time accurate
+ output = model.generate(input_ids,
+ max_new_tokens=args.n_predict)
+ # start inference
+ st = time.time()
+
+ output = model.generate(input_ids,
+ do_sample=False,
+ max_new_tokens=args.n_predict)
+ torch.xpu.synchronize()
+ end = time.time()
+ output_str = tokenizer.decode(output[0], skip_special_tokens=False)
+ print(f'Inference time: {end-st} s')
+ print('-'*20, 'Prompt', '-'*20)
+ print(prompt)
+ print('-'*20, 'Output', '-'*20)
+ print(output_str)
diff --git a/python/llm/example/GPU/PyTorch-Models/Model/phi-3/README.md b/python/llm/example/GPU/PyTorch-Models/Model/phi-3/README.md
new file mode 100644
index 00000000000..ed8051b66ea
--- /dev/null
+++ b/python/llm/example/GPU/PyTorch-Models/Model/phi-3/README.md
@@ -0,0 +1,131 @@
+# Phi-3
+In this directory, you will find examples of how to use the IPEX-LLM `optimize_model` API to accelerate Phi-3 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as the reference Phi-3 model.
+
+## 0. Requirements
+To run these examples with IPEX-LLM on Intel GPUs, your machine should meet some recommended requirements; please refer to [here](../../../README.md#requirements) for more information.
+
+## Example: Predict Tokens using `generate()` API
+In the example [generate.py](./generate.py), we show a basic use case for a Phi-3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
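+
+The core of the example is the one-line `optimize_model` call followed by moving the model to `'xpu'`; the following is a condensed sketch of [generate.py](./generate.py) (see the full script for the warmup run and timing):
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from ipex_llm import optimize_model
+
+# Load with the stock Hugging Face API, optimize with one line, then move to the Intel GPU
+model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                             torch_dtype='auto',
+                                             trust_remote_code=True)
+model = optimize_model(model)  # INT4 optimization by default
+model = model.to('xpu')
+
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct",
+                                          trust_remote_code=True)
+
+prompt = "<|user|>\nWhat is AI?<|end|>\n<|assistant|>"
+input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
+output = model.generate(input_ids, max_new_tokens=32)
+print(tokenizer.decode(output[0]))
+```
+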
+### 1. Install
+#### 1.1 Installation on Linux
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.11
+conda activate llm
+# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.37.0
+```
+
+#### 1.2 Installation on Windows
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.11 libuv
+conda activate llm
+# the command below uses pip to install the Intel oneAPI Base Toolkit 2024.0
+pip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0
+
+# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.37.0
+```
+
+### 2. Configure OneAPI environment variables for Linux
+
+> [!NOTE]
+> Skip this step if you are running on Windows.
+
+This step is required on Linux when oneAPI was installed via APT or the offline installer; skip it if oneAPI was installed via pip.
+
+```bash
+source /opt/intel/oneapi/setvars.sh
+```
+
+### 3. Runtime Configurations
+For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
+#### 3.1 Configurations for Linux
+
+**For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series**
+
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+```
+
+**For Intel Data Center GPU Max Series**
+
+```bash
+export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+export ENABLE_SDP_FUSION=1
+```
+> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
+
+**For Intel iGPU**
+
+```bash
+export SYCL_CACHE_PERSISTENT=1
+export BIGDL_LLM_XMX_DISABLED=1
+```
+
+#### 3.2 Configurations for Windows
+
+**For Intel iGPU**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+set BIGDL_LLM_XMX_DISABLED=1
+```
+
+**For Intel Arc™ A-Series Graphics**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+```
+
+> [!NOTE]
+> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60 GPU, it may take several minutes to compile.
+### 4. Running examples
+
+```bash
+python ./generate.py --prompt 'What is AI?'
+```
+
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Phi-3 model (e.g. `microsoft/Phi-3-mini-4k-instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-mini-4k-instruct'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
+
+#### Sample Output
+#### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
+
+```log
+Inference time: xxxx s
+-------------------- Prompt --------------------
+<|user|>
+What is AI?<|end|>
+<|assistant|>
+-------------------- Output --------------------
+<|user|> What is AI?<|end|><|assistant|> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The goal
+```
diff --git a/python/llm/example/GPU/PyTorch-Models/Model/phi-3/generate.py b/python/llm/example/GPU/PyTorch-Models/Model/phi-3/generate.py
new file mode 100644
index 00000000000..2619955a495
--- /dev/null
+++ b/python/llm/example/GPU/PyTorch-Models/Model/phi-3/generate.py
@@ -0,0 +1,79 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import torch
+import time
+import argparse
+
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from ipex_llm import optimize_model
+
+# You could tune the prompt based on your own model;
+# the prompt format below follows https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#chat-format
+PHI3_PROMPT_FORMAT = "<|user|>\n{prompt}<|end|>\n<|assistant|>"
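+# e.g. the default prompt "What is AI?" renders as:
+#   <|user|>
+#   What is AI?<|end|>
+#   <|assistant|>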
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3 model')
+ parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-mini-4k-instruct",
+ help='The huggingface repo id for the phi-3 model to be downloaded'
+ ', or the path to the huggingface checkpoint folder')
+ parser.add_argument('--prompt', type=str, default="What is AI?",
+ help='Prompt to infer')
+ parser.add_argument('--n-predict', type=int, default=32,
+ help='Max tokens to predict')
+
+ args = parser.parse_args()
+ model_path = args.repo_id_or_model_path
+
+ # Load model
+ model = AutoModelForCausalLM.from_pretrained(model_path,
+ trust_remote_code=True,
+ torch_dtype='auto',
+ low_cpu_mem_usage=True,
+ use_cache=True)
+
+    # Enable IPEX-LLM optimization on the model with a single line.
+    # For Windows users running LLMs on Intel iGPUs, we recommend setting `cpu_embedding=True`
+    # in the optimize_model function, so that the memory-intensive embedding layer
+    # runs on the CPU instead of the iGPU.
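+    # (i.e. model = optimize_model(model, cpu_embedding=True) in that case)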
+ model = optimize_model(model)
+ model = model.to('xpu')
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+ trust_remote_code=True)
+
+ # Generate predicted tokens
+ with torch.inference_mode():
+ prompt = PHI3_PROMPT_FORMAT.format(prompt=args.prompt)
+ input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
+
+        # The ipex_llm-optimized model needs a warmup run; only then is the inference time accurate
+ output = model.generate(input_ids,
+ max_new_tokens=args.n_predict)
+ # start inference
+ st = time.time()
+
+ output = model.generate(input_ids,
+ do_sample=False,
+ max_new_tokens=args.n_predict)
+ torch.xpu.synchronize()
+ end = time.time()
+ output_str = tokenizer.decode(output[0], skip_special_tokens=False)
+ print(f'Inference time: {end-st} s')
+ print('-'*20, 'Prompt', '-'*20)
+ print(prompt)
+ print('-'*20, 'Output', '-'*20)
+ print(output_str)
|