We now have the ability to save models from #268. Some users may want to run a quick inference off of a previously saved model rather than retraining. This would reduce the time to get new drafts from 8 hours to under 15 minutes (possibly 5). This would require:
- API design in Serval to indicate "just inference, don't retrain" (a request sketch follows this list)
- Scripture Forge work to choose "just inference, don't retrain"
- Machine.py updates to enable referencing the previously saved model
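To make the first bullet concrete, an inference-only build request might look something like the sketch below. The builds route mirrors Serval's existing REST API, but the `parent_model_name: "saved"` option is the proposal in this issue, and all field names, hosts, and ids are assumptions rather than a finalized design.

```python
# Hedged sketch: kick off an inference-only build via Serval's REST API.
# The "parent_model_name": "saved" option is the new behavior proposed
# here; treat the option name (and the placeholder host/ids) as assumptions.
import requests

SERVAL_URL = "https://serval.example.org"  # placeholder host
ENGINE_ID = "<engine-id>"                  # placeholder engine id

resp = requests.post(
    f"{SERVAL_URL}/api/v1/translation/engines/{ENGINE_ID}/builds",
    headers={"Authorization": "Bearer <token>"},
    json={
        # Proposed option: reuse the saved model instead of retraining.
        "options": {"parent_model_name": "saved"},
    },
)
resp.raise_for_status()
print(resp.json())  # build id/status to poll while the draft is generated
```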
API design

- The Nmt engine is marked for persistence.
- For build options: overload `parent_model_name` so that, if it is set to `saved`, the previously saved model is used and `train_params.do_train` is set to `false` (see the sketch after this list).
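As a rough illustration of how Machine.py could interpret the overloaded option, here is a minimal sketch under assumed names, not the actual implementation:

```python
# Minimal sketch: a parent_model_name of "saved" switches the job to
# inference-only by disabling training. BuildConfig, TrainParams, and
# apply_build_options are illustrative assumptions, not Machine.py's API.
from dataclasses import dataclass, field

@dataclass
class TrainParams:
    do_train: bool = True  # current behavior: always retrain

@dataclass
class BuildConfig:
    parent_model_name: str = ""
    train_params: TrainParams = field(default_factory=TrainParams)

def apply_build_options(options: dict) -> BuildConfig:
    config = BuildConfig()
    if options.get("parent_model_name") == "saved":
        # Inference-only: reference the previously saved model, skip training.
        config.parent_model_name = "saved"
        config.train_params.do_train = False
    return config

# On this sketch, apply_build_options({"parent_model_name": "saved"})
# yields a config with do_train == False, so the build goes straight to drafting.
```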
SF design

- Should it be auto-magical?