Hi,

Thanks for your great work! It truly provides biologists like me with a new perspective.
I am new to transformers and Hugging Face, and I have just started learning by following the official tutorials. I am very interested in fine-tuning the models on my own server with GPU support.
Specifically, I want to use the model to predict whether a series of DNA sequences are enhancers or not. However, I have a few questions:
How can I load the train and test datasets provided for downstream tasks on Hugging Face? Should I preprocess or transform them before fine-tuning the model?
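To make the first question concrete, here is roughly what I have pieced together from the Hugging Face tutorials so far. The dataset and model identifiers below are placeholders (I could not find the exact Hub names for this project), and I am guessing the column holding the raw DNA strings is called "sequence", so please correct me where the real setup differs:

```python
# Rough sketch — "some-org/plant-enhancer-data" and "some-org/plant-dna-model"
# are placeholder repo names, not the project's actual Hub identifiers.
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the downstream-task dataset from the Hub
dataset = load_dataset("some-org/plant-enhancer-data")

# Tokenizer that matches the pretrained model
tokenizer = AutoTokenizer.from_pretrained("some-org/plant-dna-model")

def tokenize(batch):
    # Convert raw DNA strings into model inputs; I am assuming the
    # column with the sequences is called "sequence"
    return tokenizer(batch["sequence"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
train_ds = tokenized["train"]
test_ds = tokenized["test"]
```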
Is there a way to run the training and inference code entirely on my own server rather than using Hugging Face's platform? Could you share example code for that?
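And for the second question, this is the kind of fully local workflow I have in mind, reusing `train_ds`/`test_ds` from the sketch above and again with placeholder names. Once the weights and data are downloaded (or swapped for local paths), I believe nothing here should need Hugging Face's servers, but please tell me if the intended usage is different:

```python
# Placeholder model name again; a local checkpoint path should also work here.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Two labels: enhancer vs. non-enhancer
model = AutoModelForSequenceClassification.from_pretrained(
    "some-org/plant-dna-model",
    num_labels=2,
)

training_args = TrainingArguments(
    output_dir="./enhancer-finetune",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,  # from the loading sketch above
    eval_dataset=test_ds,
)

trainer.train()

# Inference on the held-out split, still entirely local
predictions = trainer.predict(test_ds)
print(predictions.predictions.argmax(axis=-1))
```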
Thank you so much for your help. Any guidance or suggestions would be greatly appreciated!
Best regards,
The authors' response to the reviewers' comments, attached with the manuscript, provides code and reference links for fine-tuning and pretraining. The manuscript is a great contribution to the plant science community. I am very interested in the zero-shot learning section, but it seems the authors did not provide the mathematical formulae or detailed reasoning behind it, which makes it a little hard to understand why they took this approach. In any case, it is a great paper and well worth following.