Is there a way I can run evaluate.py on a separate test set with trained models? It appears it doesn't support OID and a couple of other formats that train.py does.
Those datasets, like OID, were added later via community contributions. If you want to make a PR to add them to evaluate.py, it would be much appreciated.
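In case it helps whoever picks up that PR, here is a rough sketch of what an OID branch in evaluate.py might look like, assuming evaluate.py uses the same per-dataset `create_generator(args)` dispatch that train.py does. The import path, the `OpenImagesGenerator` class name, and the argument names below are assumptions modelled on the train.py pattern, not the actual repository code, so verify them against the current source before copying anything.

```python
# Sketch only: names and arguments are assumed from the train.py pattern.
from keras_retinanet.preprocessing.open_images import OpenImagesGenerator  # assumed import path


def create_oid_generator(args):
    """Build a held-out-split generator for an Open Images (OID) dataset."""
    return OpenImagesGenerator(
        args.main_dir,                       # root directory of the OID data
        subset='validation',                 # evaluate on the held-out split
        version=args.version,                # OID release, e.g. 'v4'
        labels_filter=args.labels_filter,    # optional subset of class labels
        annotation_cache_dir=args.annotation_cache_dir,
        parent_label=args.parent_label,
        image_min_side=args.image_min_side,
        image_max_side=args.image_max_side,
    )


def create_generator(args):
    """Dispatch on dataset type, mirroring the existing coco/pascal/csv branches."""
    if args.dataset_type == 'oid':
        return create_oid_generator(args)
    # ... existing 'coco', 'pascal' and 'csv' branches stay unchanged ...
    raise ValueError('Invalid data type received: {}'.format(args.dataset_type))
```

The PR would also need a matching `oid` subparser in evaluate.py's argument parsing (main_dir, --version, --labels-filter, etc.), which could likewise be copied from the corresponding block in train.py.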