
How can I fine-tune downstream tasks on my own server? #85

Open
nanu23333 opened this issue Jan 3, 2025 · 2 comments


@nanu23333

Hi,

Thanks for your great work! It truly provides biologists like me with a new perspective.

I am new to transformers and Hugging Face, and I have just started learning by following the official tutorials. I am very interested in fine-tuning the models on my own server with GPU support.

Specifically, I want to use the model to predict whether a series of DNA sequences are enhancers or not. However, I have a few questions:

  1. How can I load the train and test datasets provided for downstream tasks on Hugging Face? Should I preprocess or transform them before fine-tuning the model? (See the loading sketch below.)

  2. Is there a way to run the training and inference code entirely on my own server rather than on Hugging Face's platform? Could you share example code for that? (See the training sketch after the loading one.)
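
To make question 1 concrete, here is roughly the loading and preprocessing flow I have in mind. This is only a minimal sketch: the repository IDs and column names below are placeholders I made up, not the real ones from this project.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder IDs -- substitute the actual dataset and model names
# from the authors' Hugging Face page.
DATASET_ID = "your-org/enhancer-dataset"   # hypothetical
MODEL_ID = "your-org/plant-dna-model"      # hypothetical

# load_dataset downloads the files once and caches them locally
# (~/.cache/huggingface), so later runs can work offline.
dataset = load_dataset(DATASET_ID)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

def tokenize(batch):
    # Assumes each example stores the raw DNA string in a "sequence"
    # column; adjust to the dataset's real schema.
    return tokenizer(batch["sequence"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)
```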
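
And for question 2, continuing from the sketch above, is something like the following right? Once the weights and data are cached, fine-tuning and inference with the standard `transformers` Trainer should run entirely on a local GPU server. The split names and the label mapping here are assumptions on my part.

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# Binary classification head on top of the pretrained backbone;
# the mapping 1 = enhancer, 0 = non-enhancer is assumed.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=2, trust_remote_code=True
)

args = TrainingArguments(
    output_dir="enhancer-finetune",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],   # assumes "train"/"test" splits
    eval_dataset=tokenized["test"],
    # Pads each batch dynamically and renames "label" -> "labels".
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
print(trainer.evaluate())

# Inference on a new sequence, fully local:
inputs = tokenizer("ACGTACGTACGT", return_tensors="pt").to(model.device)  # toy input
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities
```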

Thank you so much for your help. Any guidance or suggestions would be greatly appreciated!

Best regards,

@a00101

a00101 commented Jan 16, 2025

I’m also waiting for this answer.

@bguo018

bguo018 commented Jan 28, 2025

The authors' response to the reviewers' comments, attached to the manuscript, provides code and reference links for fine-tuning and pretraining. The manuscript is a great contribution to the plant science community. I am very interested in the zero-shot learning section, but it seems the authors did not provide the mathematical formulae or detailed reasoning, so it is a little hard to understand why they did what they did. In any case, it is a great paper and worth following.
