diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/README.md b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/README.md
deleted file mode 100644
index 16024a837bc..00000000000
--- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/README.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Run HuggingFace `transformers` Models with Pipeline Optimization on Intel NPU
-
-In this directory, you will find examples of how to directly run HuggingFace `transformers` models with pipeline optimization on Intel NPUs. See the table below for verified models.
-
-## Verified Models
-
-| Model | Model Link |
-|------------|----------------------------------------------------------------|
-| Llama2 | [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) |
-| Llama3 | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) |
-| Llama3.2 | [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct), [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) |
-| Qwen2 | [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) |
-| Qwen2.5 | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) |
-| Baichuan2 | [baichuan-inc/Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) |
-| MiniCPM | [openbmb/MiniCPM-1B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16), [openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) |
-
-## 0. Prerequisites
-For `ipex-llm` NPU support, please refer to [Quick Start](../../../../../../../docs/mddocs/Quickstart/npu_quickstart.md#install-prerequisites) for details about the required preparations.
-
-## 1. Install & Runtime Configurations
-### 1.1 Installation on Windows
-We suggest using conda to manage the environment:
-```cmd
-conda create -n llm python=3.11
-conda activate llm
-
-:: install ipex-llm with 'npu' option
-pip install --pre --upgrade ipex-llm[npu]
-
-:: [optional] for Llama-3.2-1B-Instruct & Llama-3.2-3B-Instruct
-pip install transformers==4.45.0 accelerate==0.33.0
-```
-
-Please refer to [Quick Start](../../../../../../../docs/mddocs/Quickstart/npu_quickstart.md#install-ipex-llm-with-npu-support) for more details about `ipex-llm` installation on Intel NPU.
-
-### 1.2 Runtime Configurations
-Please refer to [Quick Start](../../../../../../../docs/mddocs/Quickstart/npu_quickstart.md#runtime-configurations) for environment variables setting based on your device.
-
-## 2. Run Optimized Models
-The examples below show how to run the **_optimized HuggingFace model implementations_** on Intel NPU:
-
-```cmd
-:: to run Llama-2-7b-chat-hf
-python llama2.py --repo-id-or-model-path "meta-llama/Llama-2-7b-chat-hf" --save-directory <converted_model_path>
-
-:: to run Meta-Llama-3-8B-Instruct
-python llama3.py --repo-id-or-model-path "meta-llama/Meta-Llama-3-8B-Instruct" --save-directory <converted_model_path>
-
-:: to run Llama-3.2-1B-Instruct
-python llama3.py --repo-id-or-model-path "meta-llama/Llama-3.2-1B-Instruct" --save-directory <converted_model_path>
-
-:: to run Llama-3.2-3B-Instruct
-python llama3.py --repo-id-or-model-path "meta-llama/Llama-3.2-3B-Instruct" --save-directory <converted_model_path>
-
-:: to run Qwen2.5-7B-Instruct
-python qwen.py --repo-id-or-model-path "Qwen/Qwen2.5-7B-Instruct" --save-directory <converted_model_path>
-
-:: to run Qwen2-1.5B-Instruct
-python qwen.py --repo-id-or-model-path "Qwen/Qwen2-1.5B-Instruct" --low-bit sym_int8 --save-directory <converted_model_path>
-
-:: to run Qwen2.5-3B-Instruct
-python qwen.py --repo-id-or-model-path "Qwen/Qwen2.5-3B-Instruct" --low-bit sym_int8 --save-directory <converted_model_path>
-
-:: to run Baichuan2-7B-Chat
-python baichuan2.py --repo-id-or-model-path "baichuan-inc/Baichuan2-7B-Chat" --save-directory <converted_model_path>
-
-:: to run MiniCPM-1B-sft-bf16
-python minicpm.py --repo-id-or-model-path "openbmb/MiniCPM-1B-sft-bf16" --save-directory <converted_model_path>
-
-:: to run MiniCPM-2B-sft-bf16
-python minicpm.py --repo-id-or-model-path "openbmb/MiniCPM-2B-sft-bf16" --save-directory <converted_model_path>
-```
-
-Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the model (e.g. `meta-llama/Llama-2-7b-chat-hf`) to be downloaded, or the path to the huggingface checkpoint folder.
-- `--prompt PROMPT`: argument defining the prompt to be inferred. It defaults to `What is AI?`.
-- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
-- `--max-context-len MAX_CONTEXT_LEN`: Defines the maximum sequence length for both input and output tokens. It defaults to `1024`.
-- `--max-prompt-len MAX_PROMPT_LEN`: Defines the maximum number of tokens that the input prompt can contain. It defaults to `512`.
-- `--disable-transpose-value-cache`: Disable the optimization of transposing the value cache.
-- `--disable-streaming`: Disable streaming mode of generation.
-- `--save-directory SAVE_DIRECTORY`: argument defining the path to save the converted model. If it is a non-existing path, the original pretrained model specified by `REPO_ID_OR_MODEL_PATH` will be loaded and converted; otherwise, the low-bit model in `SAVE_DIRECTORY` will be loaded.
-
-### Sample Output of Streaming Mode
-#### [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
-
-```log
--------------------- Input --------------------
-input length: 28
-[INST] <<SYS>>
-
-<</SYS>>
-
-What is AI? [/INST]
--------------------- Output --------------------
- AI (Artificial Intelligence) is a field of computer science and technology that focuses on the development of intelligent machines that can perform
-
-Inference time: xxxx s
-```
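The `--save-directory` behavior described in the removed README above, and implemented identically by each of the deleted scripts that follow, amounts to a convert-then-reload flow: the first run converts the HuggingFace checkpoint and saves a low-bit copy, and later runs reload that copy directly. A minimal sketch of that flow, with a placeholder model id and save path, and with keyword arguments taken from the removed examples rather than from any other documented API:

```python
import os

import torch
from ipex_llm.transformers.npu_model import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"   # HF repo id or local checkpoint folder
save_directory = "./llama2-npu-converted"      # placeholder folder for the converted model

if not os.path.exists(save_directory):
    # First run: convert the checkpoint for the NPU pipeline and save the low-bit model.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 optimize_model=True,
                                                 pipeline=True,
                                                 max_context_len=1024,
                                                 max_prompt_len=512,
                                                 torch_dtype=torch.float16,
                                                 attn_implementation="eager",
                                                 save_directory=save_directory)
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    tokenizer.save_pretrained(save_directory)
else:
    # Later runs: skip conversion and reload the saved low-bit model directly.
    model = AutoModelForCausalLM.load_low_bit(save_directory,
                                              attn_implementation="eager",
                                              torch_dtype=torch.float16,
                                              max_context_len=1024,
                                              max_prompt_len=512,
                                              pipeline=True)
    tokenizer = AutoTokenizer.from_pretrained(save_directory, trust_remote_code=True)
```

This is why `--save-directory` is a required argument in every script below: the conversion cost is paid once, and subsequent invocations branch into the `load_low_bit` path.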
diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/baichuan2.py b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/baichuan2.py
deleted file mode 100644
index 7c07cc93351..00000000000
--- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/baichuan2.py
+++ /dev/null
@@ -1,120 +0,0 @@
-#
-# Copyright 2016 The BigDL Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-
-import os
-import torch
-import time
-import argparse
-from ipex_llm.transformers.npu_model import AutoModelForCausalLM
-from transformers import AutoTokenizer, TextStreamer
-from transformers.utils import logging
-
-logger = logging.get_logger(__name__)
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(
-        description="Predict Tokens using `generate()` API for npu model"
-    )
-    parser.add_argument(
-        "--repo-id-or-model-path",
-        type=str,
-        default="baichuan-inc/Baichuan2-7B-Chat",
-        help="The huggingface repo id for the Baichuan2 model to be downloaded"
-        ", or the path to the huggingface checkpoint folder",
-    )
-    parser.add_argument('--prompt', type=str, default="What is AI?",
-                        help='Prompt to infer')
-    parser.add_argument("--n-predict", type=int, default=32, help="Max tokens to predict")
-    parser.add_argument("--max-context-len", type=int, default=1024)
-    parser.add_argument("--max-prompt-len", type=int, default=512)
-    parser.add_argument("--quantization_group_size", type=int, default=0)
-    parser.add_argument("--disable-transpose-value-cache", action="store_true", default=False)
-    parser.add_argument("--disable-streaming", action="store_true", default=False)
-    parser.add_argument("--save-directory", type=str,
-                        required=True,
-                        help="The path of folder to save converted model, "
-                             "If path not exists, lowbit model will be saved there. "
-                             "Else, lowbit model will be loaded.",
-                        )
-
-    args = parser.parse_args()
-    model_path = args.repo_id_or_model_path
-
-    if not os.path.exists(args.save_directory):
-        model = AutoModelForCausalLM.from_pretrained(model_path,
-                                                     optimize_model=True,
-                                                     pipeline=True,
-                                                     max_context_len=args.max_context_len,
-                                                     max_prompt_len=args.max_prompt_len,
-                                                     quantization_group_size=args.quantization_group_size,
-                                                     torch_dtype=torch.float16,
-                                                     attn_implementation="eager",
-                                                     transpose_value_cache=not args.disable_transpose_value_cache,
-                                                     trust_remote_code=True,
-                                                     save_directory=args.save_directory)
-        tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
-        tokenizer.save_pretrained(args.save_directory)
-    else:
-        model = AutoModelForCausalLM.load_low_bit(
-            args.save_directory,
-            attn_implementation="eager",
-            torch_dtype=torch.float16,
-            max_context_len=args.max_context_len,
-            max_prompt_len=args.max_prompt_len,
-            pipeline=True,
-            transpose_value_cache=not args.disable_transpose_value_cache,
-            trust_remote_code=True
-        )
-        tokenizer = AutoTokenizer.from_pretrained(args.save_directory, trust_remote_code=True)
-
-
-    if args.disable_streaming:
-        streamer = None
-    else:
-        streamer = TextStreamer(tokenizer=tokenizer, skip_special_tokens=True)
-
-    DEFAULT_SYSTEM_PROMPT = """\
-    """
-
-    print("-" * 80)
-    print("done")
-    with torch.inference_mode():
-        print("finish to load")
-        for i in range(3):
-            messages = [{"role": "system", "content": "You are a helpful assistant."},
-                        {"role": "user", "content": args.prompt}]
-            text = tokenizer.apply_chat_template(messages,
-                                                 tokenize=False,
-                                                 add_generation_prompt=True)
-            _input_ids = tokenizer([text], return_tensors="pt").input_ids
-            print("-" * 20, "Input", "-" * 20)
-            print("input length:", len(_input_ids[0]))
-            print(args.prompt)
-            print("-" * 20, "Output", "-" * 20)
-            st = time.time()
-            output = model.generate(
-                _input_ids, max_new_tokens=args.n_predict, streamer=streamer
-            )
-            end = time.time()
-            if args.disable_streaming:
-                output_str = tokenizer.decode(output[0], skip_special_tokens=False)
-                print(output_str)
-            print(f"Inference time: {end-st} s")
-
-    print("-" * 80)
-    print("done")
-    print("success shut down")
diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/llama2.py b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/llama2.py
deleted file mode 100644
index d11b1891e35..00000000000
--- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/llama2.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#
-# Copyright 2016 The BigDL Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-
-import os
-import torch
-import time
-import argparse
-from ipex_llm.transformers.npu_model import AutoModelForCausalLM
-from transformers import AutoTokenizer, TextStreamer
-from transformers.utils import logging
-
-logger = logging.get_logger(__name__)
-
-def get_prompt(message: str, chat_history: list[tuple[str, str]],
-               system_prompt: str) -> str:
-    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
-    # The first user input is _not_ stripped
-    do_strip = False
-    for user_input, response in chat_history:
-        user_input = user_input.strip() if do_strip else user_input
-        do_strip = True
-        texts.append(f'{user_input} [/INST] {response.strip()} [INST] ')
-    message = message.strip() if do_strip else message
-    texts.append(f'{message} [/INST]')
-    return ''.join(texts)
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(
-        description="Predict Tokens using `generate()` API for npu model"
-    )
-    parser.add_argument(
-        "--repo-id-or-model-path",
-        type=str,
-        default="meta-llama/Llama-2-7b-chat-hf",
-        help="The huggingface repo id for the Llama2 model to be downloaded"
-        ", or the path to the huggingface checkpoint folder",
-    )
-    parser.add_argument('--prompt', type=str, default="What is AI?",
-                        help='Prompt to infer')
-    parser.add_argument("--n-predict", type=int, default=32, help="Max tokens to predict")
-    parser.add_argument("--max-context-len", type=int, default=1024)
-    parser.add_argument("--max-prompt-len", type=int, default=512)
-    parser.add_argument("--quantization_group_size", type=int, default=0)
-    parser.add_argument("--disable-transpose-value-cache", action="store_true", default=False)
-    parser.add_argument("--disable-streaming", action="store_true", default=False)
-    parser.add_argument("--save-directory", type=str,
-                        required=True,
-                        help="The path of folder to save converted model, "
-                             "If path not exists, lowbit model will be saved there. "
-                             "Else, lowbit model will be loaded.",
-                        )
-
-    args = parser.parse_args()
-    model_path = args.repo_id_or_model_path
-
-    if not os.path.exists(args.save_directory):
-        model = AutoModelForCausalLM.from_pretrained(model_path,
-                                                     optimize_model=True,
-                                                     pipeline=True,
-                                                     max_context_len=args.max_context_len,
-                                                     max_prompt_len=args.max_prompt_len,
-                                                     quantization_group_size=args.quantization_group_size,
-                                                     torch_dtype=torch.float16,
-                                                     attn_implementation="eager",
-                                                     transpose_value_cache=not args.disable_transpose_value_cache,
-                                                     save_directory=args.save_directory)
-        tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
-        tokenizer.save_pretrained(args.save_directory)
-    else:
-        model = AutoModelForCausalLM.load_low_bit(
-            args.save_directory,
-            attn_implementation="eager",
-            torch_dtype=torch.float16,
-            max_context_len=args.max_context_len,
-            max_prompt_len=args.max_prompt_len,
-            pipeline=True,
-            transpose_value_cache=not args.disable_transpose_value_cache,
-        )
-        tokenizer = AutoTokenizer.from_pretrained(args.save_directory, trust_remote_code=True)
-
-
-    if args.disable_streaming:
-        streamer = None
-    else:
-        streamer = TextStreamer(tokenizer=tokenizer, skip_special_tokens=True)
-
-    DEFAULT_SYSTEM_PROMPT = """\
-    """
-
-    print("-" * 80)
-    print("done")
-    with torch.inference_mode():
-        print("finish to load")
-        for i in range(3):
-            prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)
-            _input_ids = tokenizer.encode(prompt, return_tensors="pt")
-            print("-" * 20, "Input", "-" * 20)
-            print("input length:", len(_input_ids[0]))
-            print(prompt)
-            print("-" * 20, "Output", "-" * 20)
-            st = time.time()
-            output = model.generate(
-                _input_ids, max_new_tokens=args.n_predict, streamer=streamer
-            )
-            end = time.time()
-            if args.disable_streaming:
-                output_str = tokenizer.decode(output[0], skip_special_tokens=False)
-                print(output_str)
-            print(f"Inference time: {end-st} s")
-
-    print("-" * 80)
-    print("done")
-    print("success shut down")
diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/llama3.py b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/llama3.py
deleted file mode 100644
index baf923374af..00000000000
--- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/llama3.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#
-# Copyright 2016 The BigDL Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-
-import os
-import torch
-import time
-import argparse
-from ipex_llm.transformers.npu_model import AutoModelForCausalLM
-from transformers import AutoTokenizer, TextStreamer
-from transformers.utils import logging
-
-logger = logging.get_logger(__name__)
-
-# you could tune the prompt based on your own model,
-# here the prompt tuning refers to https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3
-DEFAULT_SYSTEM_PROMPT = """\
-"""
-
-def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
-               system_prompt: str) -> str:
-    prompt_texts = [f'<|begin_of_text|>']
-
-    if system_prompt != '':
-        prompt_texts.append(f'<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>')
-
-    for history_input, history_response in chat_history:
-        prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{history_input.strip()}<|eot_id|>')
-        prompt_texts.append(f'<|start_header_id|>assistant<|end_header_id|>\n\n{history_response.strip()}<|eot_id|>')
-
-    prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{user_input.strip()}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n')
-    return ''.join(prompt_texts)
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(
-        description="Predict Tokens using `generate()` API for npu model"
-    )
-    parser.add_argument(
-        "--repo-id-or-model-path",
-        type=str,
-        default="meta-llama/Meta-Llama-3-8B-Instruct",
-        help="The huggingface repo id for the Llama3 model to be downloaded"
-        ", or the path to the huggingface checkpoint folder",
-    )
-    parser.add_argument('--prompt', type=str, default="What is AI?",
-                        help='Prompt to infer')
-    parser.add_argument("--n-predict", type=int, default=32, help="Max tokens to predict")
-    parser.add_argument("--max-context-len", type=int, default=1024)
-    parser.add_argument("--max-prompt-len", type=int, default=512)
-    parser.add_argument("--quantization_group_size", type=int, default=0)
-    parser.add_argument("--disable-transpose-value-cache", action="store_true", default=False)
-    parser.add_argument("--disable-streaming", action="store_true", default=False)
-    parser.add_argument("--save-directory", type=str,
-                        required=True,
-                        help="The path of folder to save converted model, "
-                             "If path not exists, lowbit model will be saved there. "
-                             "Else, lowbit model will be loaded.",
-                        )
-
-    args = parser.parse_args()
-    model_path = args.repo_id_or_model_path
-
-    if not os.path.exists(args.save_directory):
-        model = AutoModelForCausalLM.from_pretrained(model_path,
-                                                     torch_dtype=torch.float16,
-                                                     optimize_model=True,
-                                                     pipeline=True,
-                                                     max_context_len=args.max_context_len,
-                                                     max_prompt_len=args.max_prompt_len,
-                                                     quantization_group_size=args.quantization_group_size,
-                                                     attn_implementation="eager",
-                                                     transpose_value_cache=not args.disable_transpose_value_cache,
-                                                     save_directory=args.save_directory)
-        tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
-        tokenizer.save_pretrained(args.save_directory)
-    else:
-        model = AutoModelForCausalLM.load_low_bit(
-            args.save_directory,
-            attn_implementation="eager",
-            torch_dtype=torch.float16,
-            max_context_len=args.max_context_len,
-            max_prompt_len=args.max_prompt_len,
-            pipeline=True,
-            transpose_value_cache=not args.disable_transpose_value_cache,
-        )
-        tokenizer = AutoTokenizer.from_pretrained(args.save_directory, trust_remote_code=True)
-
-
-    if args.disable_streaming:
-        streamer = None
-    else:
-        streamer = TextStreamer(tokenizer=tokenizer, skip_special_tokens=True)
-
-    print("-" * 80)
-    print("done")
-    with torch.inference_mode():
-        print("finish to load")
-        for i in range(3):
-            prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)
-            _input_ids = tokenizer.encode(prompt, return_tensors="pt")
-            print("-" * 20, "Input", "-" * 20)
-            print("input length:", len(_input_ids[0]))
-            print(prompt)
-            print("-" * 20, "Output", "-" * 20)
-            st = time.time()
-            output = model.generate(
-                _input_ids, max_new_tokens=args.n_predict, streamer=streamer
-            )
-            end = time.time()
-            if args.disable_streaming:
-                output_str = tokenizer.decode(output[0], skip_special_tokens=False)
-                print(output_str)
-            print(f"Inference time: {end-st} s")
-
-    print("-" * 80)
-    print("done")
-    print("success shut down")
diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/minicpm.py b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/minicpm.py
deleted file mode 100644
index fe2868c292b..00000000000
--- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/minicpm.py
+++ /dev/null
@@ -1,113 +0,0 @@
-#
-# Copyright 2016 The BigDL Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-
-import torch
-import time
-import argparse
-from ipex_llm.transformers.npu_model import AutoModelForCausalLM
-from transformers import AutoTokenizer, TextStreamer
-from transformers.utils import logging
-import os
-
-logger = logging.get_logger(__name__)
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(
-        description="Predict Tokens using `generate()` API for npu model"
-    )
-    parser.add_argument(
-        "--repo-id-or-model-path",
-        type=str,
-        default="openbmb/MiniCPM-1B-sft-bf16", # or "openbmb/MiniCPM-2B-sft-bf16"
-        help="The huggingface repo id for the MiniCPM model to be downloaded"
-        ", or the path to the huggingface checkpoint folder",
-    )
-    parser.add_argument('--prompt', type=str, default="What is AI?",
-                        help='Prompt to infer')
-    parser.add_argument("--n-predict", type=int, default=32, help="Max tokens to predict")
-    parser.add_argument("--max-context-len", type=int, default=1024)
-    parser.add_argument("--max-prompt-len", type=int, default=512)
-    parser.add_argument("--quantization_group_size", type=int, default=0)
-    parser.add_argument("--disable-transpose-value-cache", action="store_true", default=False)
-    parser.add_argument("--disable-streaming", action="store_true", default=False)
-    parser.add_argument("--save-directory", type=str,
-                        required=True,
-                        help="The path of folder to save converted model, "
-                             "If path not exists, lowbit model will be saved there. "
-                             "Else, lowbit model will be loaded.",
-                        )
-
-    args = parser.parse_args()
-    model_path = args.repo_id_or_model_path
-
-    if not os.path.exists(args.save_directory):
-        model = AutoModelForCausalLM.from_pretrained(model_path,
-                                                     optimize_model=True,
-                                                     pipeline=True,
-                                                     max_context_len=args.max_context_len,
-                                                     max_prompt_len=args.max_prompt_len,
-                                                     torch_dtype=torch.float16,
-                                                     attn_implementation="eager",
-                                                     quantization_group_size=args.quantization_group_size,
-                                                     transpose_value_cache=not args.disable_transpose_value_cache,
-                                                     trust_remote_code=True,
-                                                     save_directory=args.save_directory)
-        tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
-        tokenizer.save_pretrained(args.save_directory)
-    else:
-        model = AutoModelForCausalLM.load_low_bit(
-            args.save_directory,
-            attn_implementation="eager",
-            torch_dtype=torch.float16,
-            max_context_len=args.max_context_len,
-            max_prompt_len=args.max_prompt_len,
-            pipeline=True,
-            transpose_value_cache=not args.disable_transpose_value_cache,
-            trust_remote_code=True
-        )
-        tokenizer = AutoTokenizer.from_pretrained(args.save_directory, trust_remote_code=True)
-
-
-    if args.disable_streaming:
-        streamer = None
-    else:
-        streamer = TextStreamer(tokenizer=tokenizer, skip_special_tokens=True)
-
-    print("-" * 80)
-    print("done")
-    with torch.inference_mode():
-        print("finish to load")
-        for i in range(3):
-            prompt = "<用户>{}<AI>".format(args.prompt)
-            _input_ids = tokenizer.encode(prompt, return_tensors="pt")
-            print("-" * 20, "Input", "-" * 20)
-            print("input length:", len(_input_ids[0]))
-            print(prompt)
-            print("-" * 20, "Output", "-" * 20)
-            st = time.time()
-            output = model.generate(
-                _input_ids, max_new_tokens=args.n_predict, streamer=streamer
-            )
-            end = time.time()
-            if args.disable_streaming:
-                output_str = tokenizer.decode(output[0], skip_special_tokens=False)
-                print(output_str)
-            print(f"Inference time: {end-st} s")
-
-    print("-" * 80)
-    print("done")
-    print("success shut down")
diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/qwen.py b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/qwen.py
deleted file mode 100644
index ca0475c7c04..00000000000
--- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/Pipeline-Models/qwen.py
+++ /dev/null
@@ -1,118 +0,0 @@
-#
-# Copyright 2016 The BigDL Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-
-import os
-import torch
-import time
-import argparse
-from ipex_llm.transformers.npu_model import AutoModelForCausalLM
-from transformers import AutoTokenizer, TextStreamer
-from transformers.utils import logging
-
-logger = logging.get_logger(__name__)
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser(
-        description="Predict Tokens using `generate()` API for npu model"
-    )
-    parser.add_argument(
-        "--repo-id-or-model-path",
-        type=str,
-        default="Qwen/Qwen2.5-7B-Instruct", # Or Qwen2-7B-Instruct, Qwen2-1.5B-Instruct
-        help="The huggingface repo id for the Qwen model to be downloaded"
-        ", or the path to the huggingface checkpoint folder",
-    )
-    parser.add_argument('--prompt', type=str, default="AI是什么?",
-                        help='Prompt to infer')
-    parser.add_argument("--n-predict", type=int, default=32, help="Max tokens to predict")
-    parser.add_argument("--max-context-len", type=int, default=1024)
-    parser.add_argument("--max-prompt-len", type=int, default=512)
-    parser.add_argument("--quantization_group_size", type=int, default=0)
-    parser.add_argument('--low-bit', type=str, default="sym_int4",
-                        help='Low bit precision to quantize the model')
-    parser.add_argument("--disable-transpose-value-cache", action="store_true", default=False)
-    parser.add_argument("--disable-streaming", action="store_true", default=False)
-    parser.add_argument("--save-directory", type=str,
-                        required=True,
-                        help="The path of folder to save converted model, "
-                             "If path not exists, lowbit model will be saved there. "
" - "Else, lowbit model will be loaded.", - ) - - args = parser.parse_args() - model_path = args.repo_id_or_model_path - - if not os.path.exists(args.save_directory): - model = AutoModelForCausalLM.from_pretrained(model_path, - optimize_model=True, - pipeline=True, - load_in_low_bit=args.low_bit, - max_context_len=args.max_context_len, - max_prompt_len=args.max_prompt_len, - quantization_group_size=args.quantization_group_size, - torch_dtype=torch.float16, - attn_implementation="eager", - transpose_value_cache=not args.disable_transpose_value_cache, - trust_remote_code=True, - save_directory=args.save_directory) - tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) - tokenizer.save_pretrained(args.save_directory) - else: - model = AutoModelForCausalLM.load_low_bit( - args.save_directory, - attn_implementation="eager", - torch_dtype=torch.float16, - max_context_len=args.max_context_len, - max_prompt_len=args.max_prompt_len, - pipeline=True, - transpose_value_cache=not args.disable_transpose_value_cache) - tokenizer = AutoTokenizer.from_pretrained(args.save_directory, trust_remote_code=True) - - - if args.disable_streaming: - streamer = None - else: - streamer = TextStreamer(tokenizer=tokenizer, skip_special_tokens=True) - - print("-" * 80) - print("done") - messages = [{"role": "system", "content": "You are a helpful assistant."}, - {"role": "user", "content": args.prompt}] - text = tokenizer.apply_chat_template(messages, - tokenize=False, - add_generation_prompt=True) - with torch.inference_mode(): - print("finish to load") - for i in range(3): - _input_ids = tokenizer([text], return_tensors="pt").input_ids - print("-" * 20, "Input", "-" * 20) - print("input length:", len(_input_ids[0])) - print(text) - print("-" * 20, "Output", "-" * 20) - st = time.time() - output = model.generate( - _input_ids, max_new_tokens=args.n_predict, streamer=streamer - ) - end = time.time() - if args.disable_streaming: - output_str = tokenizer.decode(output[0], skip_special_tokens=False) - print(output_str) - print(f"Inference time: {end-st} s") - - print("-" * 80) - print("done") - print("success shut down") diff --git a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/README.md b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/README.md index ec5791cb4e0..147b4877604 100644 --- a/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/README.md +++ b/python/llm/example/NPU/HF-Transformers-AutoModels/LLM/README.md @@ -48,7 +48,7 @@ Please refer to [Quick Start](../../../../../../docs/mddocs/Quickstart/npu_quick ### 1.2 Runtime Configurations Please refer to [Quick Start](../../../../../../docs/mddocs/Quickstart/npu_quickstart.md#runtime-configurations) for environment variables setting based on your device. -## 2. Run Optimized Models (Experimental) +## 2. Run Optimized Models The examples below show how to run the **_optimized HuggingFace model implementations_** on Intel NPU, including - [Llama2-7B](./llama2.py) - [Llama3-8B](./llama3.py)