Sure. You can integrate your own model into the framework by following these steps:

1. Define a subclass of `BaseChat` for your model.
2. Implement the `chat` method to support both multimodal and text-only inference.
3. Set a model-id and register the class with the model registry via `@registry.register_chatmodel()`.

If your model is fine-tuned from an existing model family such as LLaVA, you can reuse the pre-defined `LLaVAChat` class (`mmte/models/llava_chat.py`) by changing the model-id and the corresponding config (e.g., `model-path`).

Your model can be tested as long as it loads correctly in `__init__` and supports both multimodal and text-only inference in `chat`.
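The steps above can be sketched roughly as follows. This is a hypothetical illustration, not the framework's actual code: the real import paths, method signatures, and config handling may differ, so `Registry` and `BaseChat` here are minimal stand-ins, and `MyFinetunedChat`, `my-finetuned-model`, and `model_path` are placeholder names.

```python
class Registry:
    """Minimal stand-in for the framework's model registry (assumption)."""
    def __init__(self):
        self.chatmodels = {}

    def register_chatmodel(self):
        # Class decorator that records the class under its model_id.
        def wrapper(cls):
            self.chatmodels[cls.model_id] = cls
            return cls
        return wrapper


registry = Registry()


class BaseChat:
    """Minimal stand-in for the framework's BaseChat base class (assumption)."""
    model_id: str = ""


@registry.register_chatmodel()
class MyFinetunedChat(BaseChat):
    # Placeholder model-id; pick a unique id for your own model.
    model_id = "my-finetuned-model"

    def __init__(self, model_path: str = "/path/to/local/checkpoint"):
        # Load your locally saved, fine-tuned weights here
        # (e.g., with your model family's usual loading code).
        self.model_path = model_path

    def chat(self, messages):
        # `messages` may mix text and image entries; handle the multimodal
        # case and fall back to text-only inference when no image is given.
        has_image = any("image_path" in m for m in messages)
        mode = "multimodal" if has_image else "text-only"
        return f"[{mode} response from {self.model_id}]"
```

Once registered this way, the framework can look the class up by its model-id and instantiate it for evaluation.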
Dear authors, does your method support testing models that have been fine-tuned and saved locally? If so, how should I proceed with this?