📚 The doc issue

Following the current release instructions:
> sudo docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.19.0/ubuntu22.04/habanalabs/pytorch-installer-2.5.1:latest
Unable to find image 'vault.habana.ai/gaudi-docker/1.19.0/ubuntu22.04/habanalabs/pytorch-installer-2.5.1:latest' locally
latest: Pulling from gaudi-docker/1.19.0/ubuntu22.04/habanalabs/pytorch-installer-2.5.1
6414378b6477: Pull complete
82f0cbb256c2: Pull complete
d50a5ec5680f: Pull complete
b84305bbd298: Pull complete
9b3008227e6e: Pull complete
bd6ead514a6f: Pull complete
...
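Once inside the container, a quick sanity check (assuming the Habana driver stack is installed on the host) is to confirm the Gaudi devices are visible with hl-smi:

> hl-smi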
> git clone https://github.com/HabanaAI/vllm-fork.git
Cloning into 'vllm-fork'...
remote: Enumerating objects: 61832, done.
remote: Counting objects: 100% (16/16), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 61832 (delta 6), reused 2 (delta 1), pack-reused 61816 (from 4)
Receiving objects: 100% (61832/61832), 56.17 MiB | 28.99 MiB/s, done.
Resolving deltas: 100% (48354/48354), done.
> cd vllm-fork/
> git checkout v0.6.4.post2+Gaudi-1.19.0
Note: switching to 'v0.6.4.post2+Gaudi-1.19.0'.
...
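Checking out a release tag leaves the repository in a detached-HEAD state, which is expected here. To confirm the checkout landed on the intended tag:

> git describe --tags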
> pip install -r requirements-hpu.txt
Ignoring fastapi: markers 'python_version < "3.9"' don't match your environment
Ignoring six: markers 'python_version > "3.11"' don't match your environment
Ignoring setuptools: markers 'python_version > "3.11"' don't match your environment
...
Successfully built vllm-hpu-extension
Installing collected packages: triton, sentencepiece, pyairports, zipp, xxhash, websockets, vllm-hpu-extension, uvloop, tomli, tabulate, sniffio, safetensors, rpds-py, pyzmq, python-dotenv, pydantic-core, pycountry, pyarrow, prometheus_client, pillow, partial-json-parser, numpy, nest-asyncio, msgspec, msgpack, llvmlite, lark, jiter, interegular, httptools, h11, fsspec, exceptiongroup, einops, distro, diskcache, dill, cloudpickle, click, annotated-types, uvicorn, tiktoken, setuptools-scm, referencing, pydantic, opencv-python-headless, numba, multiprocess, importlib_metadata, huggingface-hub, httpcore, gguf, anyio, watchfiles, tokenizers, starlette, lm-format-enforcer, jsonschema-specifications, httpx, transformers, prometheus-fastapi-instrumentator, openai, jsonschema, fastapi, ray, mistral_common, datasets, compressed-tensors, outlines
Attempting uninstall: numpy
Found existing installation: numpy 1.23.5
Uninstalling numpy-1.23.5:
Successfully uninstalled numpy-1.23.5
Attempting uninstall: fsspec
Found existing installation: fsspec 2024.10.0
Uninstalling fsspec-2024.10.0:
Successfully uninstalled fsspec-2024.10.0
Attempting uninstall: pydantic
Found existing installation: pydantic 1.10.13
Uninstalling pydantic-1.10.13:
Successfully uninstalled pydantic-1.10.13
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
neural-compressor-pt 3.2 requires numpy==1.23.5; python_version < "3.12", but you have numpy 1.26.4 which is incompatible.
neural-compressor-pt 3.2 requires pydantic==1.10.13, but you have pydantic 2.10.6 which is incompatible.
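For context, the Gaudi image ships with numpy 1.23.5 and pydantic 1.10.13, which are exact pins of the preinstalled neural-compressor-pt 3.2, and requirements-hpu.txt upgrades both past those pins. A possible (untested) workaround, assuming neural-compressor-pt is not needed for this vLLM workload, is to remove it first and reinstall the requirements:

> pip uninstall -y neural-compressor-pt
> pip install -r requirements-hpu.txt

Running pip check afterwards will confirm whether any conflicting pins remain.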
Suggest a potential alternative/fix
No response
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.