Regarding testing my own model #4

Open
ZixianGao opened this issue Aug 20, 2024 · 1 comment

Comments

@ZixianGao

Dear authors, does your method support testing models that have been fine-tuned and saved locally? If so, how should I proceed with this?

@zycheiheihei
Collaborator

zycheiheihei commented Aug 22, 2024

Sure. You can integrate your own model into the framework by following these steps (a minimal sketch follows the list).

  1. Define a subclass of BaseChat for your own model.
  2. Implement the chat method so that it supports both multimodal and text-only inference.
  3. Set the model-id and register the class with the model registry via @registry.register_chatmodel().
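
For concreteness, here is a minimal sketch of what such a subclass might look like. The import paths, the message format, and the Response signature are assumptions based on the existing *_chat.py implementations, so please compare with an actual file such as mmte/models/llava_chat.py and adjust accordingly.

```python
# Minimal sketch of a custom chat model. Names and import paths are
# assumptions; compare with an existing implementation such as
# mmte/models/llava_chat.py and adjust to the real code.
from typing import List

from mmte.models.base import BaseChat, Response   # assumed module path
from mmte.utils.registry import registry          # assumed module path


@registry.register_chatmodel()
class MyFinetunedChat(BaseChat):
    # model-id used by the configs/registry to select this model
    model_id: str = "my-finetuned-model"

    def __init__(self, model_id: str = "my-finetuned-model", **kwargs):
        # BaseChat constructor signature assumed from other *_chat.py files.
        super().__init__(model_id=model_id)
        # Load your locally fine-tuned checkpoint here; the path is an
        # illustrative placeholder, not something the framework defines.
        self.model, self.processor = self.load_checkpoint("/path/to/local/checkpoint")

    def load_checkpoint(self, path: str):
        # Hypothetical helper: replace with your own loading logic,
        # e.g. the appropriate transformers model/processor classes.
        raise NotImplementedError

    def chat(self, messages: List, **generation_kwargs) -> Response:
        # Assumed message format: the last user turn carries either plain
        # text or a dict with "text" and "image_path" keys.
        content = messages[-1]["content"]
        if isinstance(content, dict) and content.get("image_path"):
            output = self.generate(content["text"], content["image_path"], **generation_kwargs)
        else:
            text = content["text"] if isinstance(content, dict) else content
            output = self.generate(text, None, **generation_kwargs)  # text-only path
        # Response signature assumed from other *_chat.py files.
        return Response(self.model_id, output, None, None)

    def generate(self, text: str, image_path, **generation_kwargs) -> str:
        # Hypothetical helper wrapping your model's forward/generate call.
        raise NotImplementedError
```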

If your model is fine-tuned from an existing model family like LLaVA, you can use the pre-defined LLaVAChat class (mmte/models/llava_chat.py) by changing the model-id and the corresponding config (e.g., model-path).
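
One way to express that in code, instead of editing the existing config files directly as suggested above, is to register a thin variant of LLaVAChat under a new model-id that points at your local weights. The field names below are assumptions, so check mmte/models/llava_chat.py and its config for the real schema before copying this.

```python
# Hypothetical illustration only: reuse the existing LLaVAChat under a new
# model-id pointing at a locally fine-tuned checkpoint.
from mmte.models.llava_chat import LLaVAChat   # path taken from the reply above
from mmte.utils.registry import registry       # assumed module path


@registry.register_chatmodel()
class MyLLaVAChat(LLaVAChat):
    model_id: str = "llava-v1.5-7b-finetuned"              # new model-id
    model_path: str = "/path/to/finetuned/llava/weights"   # illustrative placeholder
```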

Your model can be tested as long as it can be correctly loaded in __init__ and supports multimodal and text-only inference in chat.
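
As a quick sanity check before running the full benchmark, you can instantiate the class directly and make sure both call paths work. This uses only the hypothetical names from the sketch above and assumes you have filled in the two placeholder helpers.

```python
# Quick smoke test using only the hypothetical names from the sketch above.
chat = MyFinetunedChat(model_id="my-finetuned-model")

# Multimodal inference: text plus an image path.
print(chat.chat([{"role": "user",
                  "content": {"text": "Describe the image.",
                              "image_path": "example.jpg"}}]))

# Text-only inference.
print(chat.chat([{"role": "user", "content": "Briefly introduce yourself."}]))
```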
