
Fix vLLM-v2 install instructions (#10822)
gc-fu authored Apr 22, 2024
1 parent 3cd21d5 commit 61c67af
Showing 1 changed file with 2 additions and 0 deletions.

python/llm/example/GPU/vLLM-Serving/README.md
@@ -34,6 +34,8 @@ sycl-ls
To run vLLM continuous batching on Intel GPUs, install the dependencies as follows:

```bash
# This path may vary depending on where the oneAPI Base Toolkit is installed
source /opt/intel/oneapi/setvars.sh
# First, create a conda environment
conda create -n ipex-vllm python=3.11
conda activate ipex-vllm
```
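Since the commit's added comment notes that the oneAPI path can differ between machines, a defensive version of the sourcing step might guard against a non-default install prefix. This is a sketch only, not part of the commit; `ONEAPI_HOME` is a hypothetical variable introduced here for illustration:

```shell
# Sketch, not part of the commit: source setvars.sh from a configurable
# prefix and fail loudly if it is not found. ONEAPI_HOME is hypothetical.
ONEAPI_HOME="${ONEAPI_HOME:-/opt/intel/oneapi}"
if [ -f "$ONEAPI_HOME/setvars.sh" ]; then
    # shellcheck disable=SC1091
    . "$ONEAPI_HOME/setvars.sh"
else
    echo "setvars.sh not found under $ONEAPI_HOME; set ONEAPI_HOME to your oneAPI install prefix" >&2
fi
```

With this guard, a user whose toolkit lives elsewhere can run `ONEAPI_HOME=/custom/path bash install.sh` instead of editing the script.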
