We used the ps and ph models from Hugging Face, but ran into a problem with the gradient: the gradient norm was very small. Looking at the code, we found an issue with the model initialisation (see https://github.com/kuleshov-group/caduceus/issues/37), but even after applying that fix the gradient was still very small.
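For reference, a minimal sketch of the kind of gradient check involved, assuming one of the published ps/ph checkpoints on Hugging Face and that the Caduceus dependencies (e.g. mamba-ssm) are installed; the checkpoint name and dummy sequence below are placeholders:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Example checkpoint; substitute the ph/ps model actually being trained.
model_id = "kuleshov-group/caduceus-ps_seqlen-131k_d_model-256_n_layer-16"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
model.train()

# Dummy batch: a short DNA sequence, tokenized at character level.
input_ids = tokenizer("ACGT" * 256, return_tensors="pt")["input_ids"]

# Next-token-style reconstruction loss, just to produce gradients.
logits = model(input_ids=input_ids).logits
loss = torch.nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), input_ids.reshape(-1)
)
loss.backward()

# Inspect per-parameter gradient norms to see where the gradient collapses.
total_sq = 0.0
for name, param in model.named_parameters():
    if param.grad is not None:
        g = param.grad.norm().item()
        total_sq += g ** 2
        print(f"{name}: {g:.3e}")
print(f"total grad norm: {total_sq ** 0.5:.3e}")
```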
I actually haven't done this sort of analysis myself for our models, so I'm not sure whether the grad norm ranges you're seeing are in line with my training runs. There does seem to be something odd going on with your loss curve, though, I agree. Have you tried a bigger batch size or playing around with the LR?
Did you use the most recent version of the caduceus code? Check that it includes the fix here:
Also, in our experiments in this paper we observed significant sensitivity to batch size, learning rate, and initialization weights in the state space models. These models are also sensitive to changes in both sequence length and model depth. You can see a table of the results of our experiments in Supplementary Note 8.
My advice for doing your own training is to set up initial experiments using 4 random seeds and a range of batch sizes and learning rates. Train for a short amount of time to verify that the gradient behaves as expected, and then use those hyperparameters for further training. This approach worked for us as we continued with further experiments.
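One rough sketch of that setup is below. `build_model` and `get_batch` are hypothetical stand-ins for the actual Caduceus model and dataloader, and the seed/batch-size/LR grids are only examples; the point is the short run per configuration with the gradient norm logged at the end:

```python
import itertools
import torch

# Hypothetical stand-ins: replace with the real Caduceus model and dataloader.
def build_model(seed: int) -> torch.nn.Module:
    torch.manual_seed(seed)
    return torch.nn.Sequential(torch.nn.Embedding(8, 64), torch.nn.Linear(64, 8))

def get_batch(batch_size: int, seq_len: int = 128):
    x = torch.randint(0, 8, (batch_size, seq_len))
    return x, x  # token-level targets

seeds = [0, 1, 2, 3]
batch_sizes = [64, 128, 256]
learning_rates = [1e-4, 3e-4, 1e-3]

for seed, bs, lr in itertools.product(seeds, batch_sizes, learning_rates):
    model = build_model(seed)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for step in range(50):  # short run: just long enough to see the trend
        x, y = get_batch(bs)
        logits = model(x)
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), y.reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        # max_norm=inf leaves gradients untouched but returns the total norm.
        grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), float("inf"))
        opt.step()
    print(
        f"seed={seed} bs={bs} lr={lr} "
        f"loss={loss.item():.3f} grad_norm={float(grad_norm):.3e}"
    )
```

Configurations whose gradient norm stays healthy over the short run are the ones worth carrying into longer training.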
dataset
my launch.py