Some questions about inference.py #71
Sorry for the late reply.
This file is available at
https://github.com/UMass-Foundation-Model/3D-LLM/blob/main/3DLLM_BLIP2-base/assets/objaverse_subset_ids_100.json
.
And I downloaded 3D-LLM from
https://github.com/UMass-Foundation-Model/3D-LLM by downloading the zip
file, and then objaverse_subset_ids_100.json is located at
/3DLLM_BLIP2-base/assets/objaverse_subset_ids_100.json.
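For reference, a minimal sketch of loading that subset file, assuming it is a flat JSON array of Objaverse object-id strings (the exact schema is not confirmed here):

```python
import json

def load_subset_ids(path):
    # Assumption: the file is a flat JSON array of object-id strings.
    with open(path) as f:
        return json.load(f)

# e.g. ids = load_subset_ids("3DLLM_BLIP2-base/assets/objaverse_subset_ids_100.json")
```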
zhangjie-tju ***@***.***> wrote on Saturday, April 20, 2024, 12:36 AM:
… May I ask where you downloaded objaverse_subset_ids_100.json?
Thanks very much! I found it. Have you pretrained it? I couldn't find all_questions.json for pretraining. Could you?
I didn't pretrain this. I only ran inference.py following the Quick Start guide.
When running python inference.py, do we need to change num_beams? I did not change any other code.
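On the num_beams question: beam search keeps one decoder state (including the key/value cache) per beam, so generation-time activation memory grows roughly linearly with num_beams. A back-of-the-envelope sketch — the layer, sequence-length, and hidden-size numbers below are placeholders, not the actual 3D-LLM configuration:

```python
def kv_cache_mib(num_beams, layers=24, seq_len=64, hidden=2048, bytes_per=2):
    # KV cache: 2 tensors (key + value) per layer, one copy per beam, fp16.
    # Placeholder dimensions; real models differ, but the scaling is linear in beams.
    return num_beams * layers * 2 * seq_len * hidden * bytes_per / 2**20

# Halving num_beams roughly halves this part of the memory footprint.
```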
A V100 is okay.
Thanks for your interesting work.
When executing inference.py following Quick Start: Inference, I encountered torch.cuda.OutOfMemoryError.
(lavis) rsl@rsl:/media/rsl/NAS2T/CH/3D-LLM/3D-LLM-main/3DLLM_BLIP2-base$ python inference.py
Loading model from checkpoint...
Loading checkpoint shards: 100%|█████████████████| 2/2 [00:01<00:00, 1.34it/s]
Traceback (most recent call last):
File "inference.py", line 48, in <module>
model.to(DEVICE)
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1152, in to
return self._apply(convert)
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
[Previous line repeated 5 more times]
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 825, in _apply
param_applied = fn(param)
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB. GPU 0 has a total capacity of 10.75 GiB of which 48.12 MiB is free. Process 449627 has 1.80 GiB memory in use. Process 518182 has 1.62 GiB memory in use. Including non-PyTorch memory, this process has 7.00 GiB memory in use. Of the allocated memory 6.39 GiB is allocated by PyTorch, and 2.63 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
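For scale: if the checkpoint is flan-t5-xl-sized (roughly 3 billion parameters — an assumption, not something the traceback confirms), the weights alone leave little room on a 10.75 GiB card that two other processes are already using:

```python
def weight_gib(n_params, bytes_per_param):
    # Raw weight storage only; ignores activations, KV cache, and CUDA overhead.
    return n_params * bytes_per_param / 2**30

fp32 = weight_gib(3e9, 4)  # ~11.2 GiB: over budget on its own
fp16 = weight_gib(3e9, 2)  # ~5.6 GiB: fits only if the GPU is otherwise free
```

Freeing the GPU of the other two processes, or loading the model in half precision, are the usual first steps; neither is guaranteed to be sufficient here.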
I tried
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
but it had no effect. Then I attempted to use flan-t5-base and ran inference.py, and encountered an AttributeError.
(lavis) rsl@rsl:/media/rsl/NAS2T/CH/3D-LLM/3D-LLM-main/3DLLM_BLIP2-base$ python inference.py
Loading model from checkpoint...
Preparing input...
obj_id: 195b8b1576414997a6e1c6622ae72140
text_input: describe the 3d scene
Traceback (most recent call last):
File "inference.py", line 95, in <module>
model_outputs = model.predict_answers(
File "/media/rsl/NAS2T/miniconda3/envs/lavis/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1688, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'predict_answers'
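This AttributeError is expected if a raw Hugging Face T5ForConditionalGeneration is loaded in place of the LAVIS model class: predict_answers is defined on the LAVIS wrapper, not on the underlying T5 model, which only exposes methods like generate. A minimal illustration of the lookup failure, using toy stand-in classes rather than the real APIs:

```python
class Backbone:
    """Stand-in for T5ForConditionalGeneration: has generate, nothing else."""
    def generate(self, **kwargs):
        return ["<generated text>"]

class LavisStyleWrapper(Backbone):
    """Stand-in for a LAVIS-style model that adds predict_answers on top."""
    def predict_answers(self, **kwargs):
        return self.generate(**kwargs)

# The wrapper has the method; the bare backbone does not,
# which is the AttributeError seen in the traceback above.
assert hasattr(LavisStyleWrapper(), "predict_answers")
assert not hasattr(Backbone(), "predict_answers")
```

The practical takeaway: to swap in flan-t5-base, change the backbone referenced in the model's config rather than loading the T5 checkpoint directly in place of the full model.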
If possible, could you inform me about the minimum hardware requirements for running inference.py?