Not able to build directory using build.py #3
Comments
I think you can try setting it to an empty dictionary. If you check the LoraConfig class, you can see that from_hf actually calls the init function, and that argument's default value is an empty dictionary.
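A minimal sketch of that workaround, assuming the call site in build.py around line 549 shown in the traceback below; only the names LoraConfig, args.hf_lora_dir, and trtllm_modules_to_hf_modules come from this thread, and the exact argument list from_hf expects may differ between TensorRT-LLM versions:

# Hypothetical patch inside build.py's parse_arguments() (LoraConfig is already
# imported there, so no extra import is needed). Passing an empty dict for the
# missing trtllm_modules_to_hf_modules argument mirrors the default that
# LoraConfig's __init__ uses.
lora_config = LoraConfig.from_hf(args.hf_lora_dir,
                                 trtllm_modules_to_hf_modules={})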
You need to use tensorrt-llm==0.7.1
After setting an empty dict and running build.sh, I am getting the following: (trtllm) vishwajeet@vishwa:~/Desktop/MYGPT/trt-llm-rag-linux$ bash build-llama.sh
I hit the same thing, so you can try installing tensorrt-llm==0.7.1
@sugar5727 Downgraded to tensorrt-llm==0.7.1 and now I am not facing those issues. I have an RTX 4060 Laptop GPU with 8 GB; when I run build-llama.sh it starts but gets killed. (trtllm) vishwajeet@vishwa:~/Desktop/MYGPT/trt-llm-rag-linux$ bash build-llama.sh
Sorry, I haven't faced that before
@sugar5727 Which GPU do you have?
RTX 4090 |
Uninstall and then re-install:
pip uninstall tensorrt_llm
pip3 install tensorrt_llm==0.7.1 -U --pre --extra-index-url https://pypi.nvidia.com --log=debug.txt
New error:
(trtllm) anil@anil-gpu2:/media/anil/New Volume/nihal/mlr_chat$ python3 app.py
(trtllm) anil@anil-gpu2:/media/anil/New Volume/nihal/mlr_chat$ conda list
# packages in environment at /home/anil/miniconda3/envs/trtllm:
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main
Please suggest a solution.
@sugar5727 |
(mlr_chat) anil@anil-gpu2:/media/anil/New Volume/nihal/mlr_chat$ ./build-mistral.sh
You are using a model of type mistral to instantiate a model of type llama. This is not supported for all configurations of models and can yield errors.
[TensorRT-LLM] TensorRT-LLM version: 0.8.0
Traceback (most recent call last):
  File "/media/anil/New Volume/nihal/mlr_chat/build.py", line 895, in <module>
    args = parse_arguments()
  File "/media/anil/New Volume/nihal/mlr_chat/build.py", line 549, in parse_arguments
    lora_config = LoraConfig.from_hf(args.hf_lora_dir,
TypeError: LoraConfig.from_hf() missing 1 required positional argument: 'trtllm_modules_to_hf_modules'
(mlr_chat) anil@anil-gpu2:/media/anil/New Volume/nihal/mlr_chat$ ./build-llama.sh
[TensorRT-LLM] TensorRT-LLM version: 0.8.0
Traceback (most recent call last):
  File "/media/anil/New Volume/nihal/mlr_chat/build.py", line 895, in <module>
    args = parse_arguments()
  File "/media/anil/New Volume/nihal/mlr_chat/build.py", line 549, in parse_arguments
    lora_config = LoraConfig.from_hf(args.hf_lora_dir,
TypeError: LoraConfig.from_hf() missing 1 required positional argument: 'trtllm_modules_to_hf_modules'
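For reference, a quick way to confirm which TensorRT-LLM version your environment actually picked up (the tracebacks above show 0.8.0, while the downgrade to 0.7.1 suggested earlier in the thread is reported to work):

# Sanity check: print the installed TensorRT-LLM version; this is the same
# value the build scripts log as "[TensorRT-LLM] TensorRT-LLM version: ...".
import tensorrt_llm
print(tensorrt_llm.__version__)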