Commit

fix
plusbang committed Jul 8, 2024
1 parent aaf0045 commit c4b96d4
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions python/llm/example/GPU/HuggingFace/README.md

@@ -1,6 +1,6 @@
-# Running HuggingFace `transformers` model using IPEX-LLM on Intel GPU
+# Running HuggingFace models using IPEX-LLM on Intel GPU
 
-This folder contains examples of running any HuggingFace `transformers` model on IPEX-LLM:
+This folder contains examples of running any HuggingFace model on IPEX-LLM:
 
 - [LLM](LLM): examples of running large language models (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) using IPEX-LLM optimizations
 - [Multimodal](Multimodal): examples of running large multimodal models (StableDiffusion models, Qwen-VL-Chat, glm-4v, etc.) using IPEX-LLM optimizations
2 changes: 1 addition & 1 deletion python/llm/example/GPU/README.md

@@ -3,7 +3,7 @@
 This folder contains examples of running IPEX-LLM on Intel GPU:
 
 - [Applications](Applications): running LLM applications (such as autogen) on IPEX-LLM
-- [HuggingFace](HuggingFace): running any ***Hugging Face Transformers*** model on IPEX-LLM (using the standard AutoModel APIs), including language models and multimodal models.
+- [HuggingFace](HuggingFace): running ***HuggingFace*** models on IPEX-LLM (using the standard AutoModel APIs), including language models and multimodal models.
 - [LLM-Finetuning](LLM-Finetuning): running ***finetuning*** (such as LoRA, QLoRA, QA-LoRA, etc) using IPEX-LLM on Intel GPUs
 - [vLLM-Serving](vLLM-Serving): running ***vLLM*** serving framework on intel GPUs (with IPEX-LLM low-bit optimized models)
 - [Deepspeed-AutoTP](Deepspeed-AutoTP): running distributed inference using ***DeepSpeed AutoTP*** (with IPEX-LLM low-bit optimized models) on Intel GPUs
