Improve discoverability + fix download stats on Hugging Face #31
Comments
Hello @NielsRogge, Thank you for your interest in our work. I have linked the dataset to the paper for now and plan to update the dataset card later, as I'm currently quite busy. As I am new to the Hugging Face tools, could you assist me in integrating MiniGPT4_video so that it can be downloaded using the from_pretrained function and be included among the models supported by Hugging Face? If this is what you mean by a PR to integrate MiniGPT4_video as a library, it would be greatly appreciated, and I will make sure it gets merged. Please let me know if there's anything else I can do to help.
Hi @KerolosAtef, Thanks for linking the dataset to the paper and uploading the model. I see that you pushed a commit to leverage the PyTorchModelHubMixin class. Now inference should work as follows:
from minigpt4.models.mini_gpt4_llama_v2 import MiniGPT4_llama_v2
model = MiniGPT4_llama_v2.from_pretrained("Vision-CAIR/MiniGPT4-Video")
Could you confirm that this works? In that case, we could update the model card to include this code snippet to showcase how to get started with the model, and update the demo code of the Space to also leverage it. Cheers!
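For context, PyTorchModelHubMixin is typically mixed into a PyTorch model class along the lines of the toy sketch below; the class and its layers are purely illustrative and not the actual MiniGPT4_llama_v2 implementation:
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Illustrative toy model; the real MiniGPT4_llama_v2 has its own config and layers.
class ToyVideoLM(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 64, vocab_size: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        return self.head(self.embed(input_ids))

# The mixin adds save_pretrained / from_pretrained / push_to_hub to the class:
model = ToyVideoLM(hidden_size=64, vocab_size=100)
model.save_pretrained("toy-video-lm")            # saves the weights to a local folder
reloaded = ToyVideoLM.from_pretrained("toy-video-lm")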
Hello @NielsRogge,
I got this error:
Hi,
So it depends on how you want the model to integrate with the hub. cc @Wauplin
Option 1
If you want people to use the MiniGPT4-Video code base with integration to the hub, then it's advised to do the following:
from minigpt4.models.mini_gpt4_llama_v2 import MiniGPT4_llama_v2
model = MiniGPT4_llama_v2(...)
# equip with weights
model.load_state_dict(...)
# push to the hub
model.push_to_hub("...")
Then you should be able to reload it as:
from minigpt4.models.mini_gpt4_llama_v2 import MiniGPT4_llama_v2
model = MiniGPT4_llama_v2.from_pretrained("Vision-CAIR/MiniGPT4-Video")
Regarding the error you got above, could you double-check whether the config was properly serialized? https://huggingface.co/Vision-CAIR/MiniGPT4-Video/blob/main/config.json It might be that there's an issue there when attempting to re-instantiate the model.
Option 2
In case you want people to use your model through the Transformers library (and make it usable through one of the Auto classes like AutoModel), then you can follow this guide: https://huggingface.co/docs/transformers/custom_models. It requires registering your model and pushing the code to the hub.
Let me know what you prefer!
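To make Option 2 a bit more concrete, below is a minimal sketch of the custom-model flow from that guide; the MiniGPT4VideoConfig/MiniGPT4VideoModel classes and the "minigpt4_video" model type are illustrative placeholders, not the repo's actual code:
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class MiniGPT4VideoConfig(PretrainedConfig):
    model_type = "minigpt4_video"  # placeholder model type

    def __init__(self, hidden_size: int = 64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class MiniGPT4VideoModel(PreTrainedModel):
    config_class = MiniGPT4VideoConfig

    def __init__(self, config: MiniGPT4VideoConfig):
        super().__init__(config)
        self.backbone = nn.Linear(config.hidden_size, config.hidden_size)  # stand-in for the real layers

    def forward(self, hidden_states):
        return self.backbone(hidden_states)

# Register the classes so they can be used through the Auto classes and so
# push_to_hub uploads the modeling code (these classes must live in a .py file).
MiniGPT4VideoConfig.register_for_auto_class()
MiniGPT4VideoModel.register_for_auto_class("AutoModel")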
Hello @NielsRogge,
I need to do Option 2, but I got stuck after this step: I don't know how to publish the registration process.
Note that
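A minimal sketch of the publishing step in question, continuing the placeholder classes from the Option 2 sketch above and using a placeholder repo id:
from transformers import AutoConfig, AutoModel

# Push the config + model; because the classes were registered for auto classes,
# the custom modeling code is uploaded to the repo as well (placeholder repo id).
config = MiniGPT4VideoConfig(hidden_size=64)
model = MiniGPT4VideoModel(config)
model.push_to_hub("your-username/minigpt4-video-custom")

# Anyone can then reload it through the Auto classes, opting in to remote code:
config = AutoConfig.from_pretrained("your-username/minigpt4-video-custom", trust_remote_code=True)
model = AutoModel.from_pretrained("your-username/minigpt4-video-custom", trust_remote_code=True)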
Hi,
Niels here from the open-source team at Hugging Face. It's great to see you're releasing models + data on HF; I discovered your work through the paper page: https://huggingface.co/papers/2407.12679.
However, there are a couple of things that could improve the discoverability of your models and make sure the download stats work.
Dataset
The dataset itself could be linked to the paper; see here for how to do that: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper
Download stats
I see that the download stats currently aren't working for your models. This is because the model repository contains various models that don't include a config.json file. See here for more info: https://huggingface.co/docs/hub/models-download-stats.
There are a few options here to make them work:
Usually, we recommend having a single repository per checkpoint, as sketched below.
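As one possible way to set that up, a minimal sketch with huggingface_hub; the checkpoint names and local paths below are placeholders:
from huggingface_hub import HfApi

api = HfApi()

# One repo per checkpoint (placeholder repo ids), each with its own config.json
# at the root so downloads can be attributed to it.
for checkpoint in ["minigpt4-video-llama2", "minigpt4-video-mistral"]:
    repo_id = f"Vision-CAIR/{checkpoint}"  # placeholder naming scheme
    api.create_repo(repo_id, repo_type="model", exist_ok=True)
    api.upload_file(
        path_or_fileobj=f"checkpoints/{checkpoint}/config.json",  # placeholder local path
        path_in_repo="config.json",
        repo_id=repo_id,
    )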
Discoverability
Moreover, the discoverability of your models could be improved by adding tags to the model card here: https://huggingface.co/Vision-CAIR/MiniGPT4-Video, such as "video-text-to-text", which helps people find the model when filtering hf.co/models.
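For example, the model card metadata can also be updated programmatically; a minimal sketch with huggingface_hub, using example tag values:
from huggingface_hub import metadata_update

# Add a pipeline tag and free-form tags to the model card metadata
# (example values; adjust to whatever fits the model best).
metadata_update(
    "Vision-CAIR/MiniGPT4-Video",
    {"pipeline_tag": "video-text-to-text", "tags": ["video", "multimodal"]},
    overwrite=True,
)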
Let me know if you need any help regarding this!
Cheers,
Niels
ML Engineer @ HF 🤗