Commit 1f876fd

* Add example for phi-3
* add in readme and index
* fix
* fix
* fix
* fix indent
* fix

1 parent: c936ba3

Showing 10 changed files with 703 additions and 0 deletions.
python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md
71 changes: 71 additions & 0 deletions
@@ -0,0 +1,71 @@
# phi-3

In this directory, you will find examples of how you could apply IPEX-LLM INT4 optimizations on phi-3 models. For illustration purposes, we use [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as a reference phi-3 model.

> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
>
> IPEX-LLM optimizes the *Transformers* model in INT4 precision at runtime, so no explicit conversion is needed.
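
As an example, one minimal way to fetch the reference model with Git, following the guide linked above (a sketch assuming `git-lfs` is installed):

```bash
# assumes git-lfs is available; see the linked Hugging Face guide for details
git lfs install
git clone https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
```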
## Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phi-3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.

### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with the 'all' option

pip install transformers==4.37.0
```
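
As an optional sanity check, you could verify the install by importing the IPEX-LLM class this example uses:

```python
# optional sanity check: this import should succeed after the install above
from ipex_llm.transformers import AutoModelForCausalLM
```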

### 2. Run
After setting up the Python environment, you could run the example with the following steps.

> **Note**: When loading the model in 4-bit, IPEX-LLM converts the linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
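> For instance, Phi-3-mini has roughly 3.8 billion parameters, so by this rule of thumb it would take about 7.6 GB of memory to load in 16-bit and about 1.9 GB for INT4 inference.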
>
> Please select the appropriate size of the phi-3 model based on the capabilities of your machine.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'What is AI?'
```
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and to run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```
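
If you are not sure how many physical cores each socket on your machine has, on Linux you could check with `lscpu` and adjust `OMP_NUM_THREADS` and the `numactl` core range accordingly:

```bash
# report socket count and physical cores per socket (Linux)
lscpu | grep -E 'Socket\(s\)|Core\(s\) per socket'
```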
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements (a combined usage example follows the list):

- `--repo-id-or-model-path`: str, the Hugging Face repo id for the phi-3 model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-mini-4k-instruct'`.
- `--prompt`: str, the prompt to be inferred (with the integrated prompt format for chat). It defaults to `What is AI?`.
- `--n-predict`: int, the max number of tokens to predict. It defaults to `32`.
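
For instance, to run with all three arguments passed explicitly (the values below are illustrative):

```bash
python ./generate.py --repo-id-or-model-path microsoft/Phi-3-mini-4k-instruct \
                     --prompt 'What is AI?' \
                     --n-predict 32
```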

#### 2.4 Sample Output
#### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
```log
-------------------- Prompt --------------------
<|user|>
What is AI?<|end|>
<|assistant|>
-------------------- Output --------------------
<s><|user|> What is AI?<|end|><|assistant|> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The goal
```

python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/generate.py
68 changes: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
```python
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# you could tune the prompt based on your own model;
# the prompt format here follows https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#chat-format
PHI3_PROMPT_FORMAT = "<|user|>\n{prompt}<|end|>\n<|assistant|>"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-mini-4k-instruct",
                        help='The huggingface repo id for the phi-3 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4-bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)

    # Load the tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = PHI3_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()

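        # do_sample=False means greedy decoding: the most likely token is picked
        # at each step, so the output is deterministic for a given prompt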
        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
```

python/llm/example/CPU/PyTorch-Models/Model/phi-3/README.md
67 changes: 67 additions & 0 deletions
@@ -0,0 +1,67 @@
# phi-3

In this directory, you will find examples of how you could apply IPEX-LLM INT4 optimizations on phi-3 models. For illustration purposes, we use [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as a reference phi-3 model.

> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
>
> IPEX-LLM optimizes the *Transformers* model in INT4 precision at runtime, so no explicit conversion is needed.

## Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phi-3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.

### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with the 'all' option

pip install transformers==4.37.0
```
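
As an optional sanity check, the import this example relies on should succeed after the install:

```python
# optional sanity check: should import cleanly after the install above
from ipex_llm import optimize_model
```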

### 2. Run
After setting up the Python environment, you could run the example with the following steps.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'What is AI?'
```
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and to run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```
More information about the arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements (a usage example follows the list):

- `--repo-id-or-model-path`: str, the Hugging Face repo id for the phi-3 model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-mini-4k-instruct'`.
- `--prompt`: str, the prompt to be inferred (with the integrated prompt format for chat). It defaults to `What is AI?`.
- `--n-predict`: int, the max number of tokens to predict. It defaults to `32`.
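
For instance, to run from a local checkpoint folder instead of downloading from the Hub (the path below is hypothetical):

```bash
python ./generate.py --repo-id-or-model-path /path/to/Phi-3-mini-4k-instruct \
                     --prompt 'What is AI?' \
                     --n-predict 32
```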

#### 2.4 Sample Output
#### [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
```log
-------------------- Prompt --------------------
<|user|>
What is AI?<|end|>
<|assistant|>
-------------------- Output --------------------
<s><|user|> What is AI?<|end|><|assistant|> AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. The goal
```

python/llm/example/CPU/PyTorch-Models/Model/phi-3/generate.py
70 changes: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
```python
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoTokenizer, AutoModelForCausalLM
from ipex_llm import optimize_model

# you could tune the prompt based on your own model;
# the prompt format here follows https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#chat-format
PHI3_PROMPT_FORMAT = "<|user|>\n{prompt}<|end|>\n<|assistant|>"

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-mini-4k-instruct",
                        help='The huggingface repo id for the phi-3 model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True,
                                                 use_cache=True)

    # Enable IPEX-LLM optimization on the model with only one line of code;
    # this applies the INT4 optimizations described in the README
    model = optimize_model(model)

    # Load the tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = PHI3_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()

        output = model.generate(input_ids,
                                do_sample=False,
                                max_new_tokens=args.n_predict)
        end = time.time()
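        # skip_special_tokens=False keeps the chat markers (e.g. <|user|>, <|end|>)
        # visible in the printed output, as in the README's sample output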
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
```