I tried finetuning using script/custom/finetune_qlora.sh, but when I load the model for inference it does not work. How do I load the model using the weights finetuned with QLoRA? I tried this code:
```python
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()

    # Video Inference
    modal = 'video'
    modal_path = 'assets/cat_and_chicken.mp4'
    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'
    # Reply:
    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.

    # Image Inference
    modal = 'image'
    modal_path = 'assets/sora.png'
    instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'
    # Reply:
    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.

    model_path = 'DAMO-NLP-SG/VideoLLaMA2.1-7B-16F'
    # Base model inference (only need to replace model_path)
    # model_path = 'DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base'
    model, processor, tokenizer = model_init(model_path)
    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)
    print(output)


if __name__ == "__main__":
    inference()
```
Here I replaced `model_path` with the LoRA checkpoint path, but it did not work. I also tried saving the full model using the PEFT merge-and-unload function, but when I load that model and run the script above, it raises an error: mat1 of size 1336x3564 cannot be multiplied with mat2 of size 512x3564.
Hello, I'm a PhD student from ZJU. I also use VideoLLaMA2 in my own research; we created a WeChat group to discuss VideoLLaMA2 issues. Could you join us? Please contact me: WeChat number LiangMeng19357260600, phone number +86 19357260600, e-mail [email protected].