
Evaluation problem #22

Open
TShoreWind opened this issue Nov 17, 2024 · 3 comments

Comments

@TShoreWind

Hello, how can I evaluate the model with this code? This is very important to me, and I'm looking forward to your reply.

@TShoreWind
Author

[screenshot attached]

@Yzmblog
Owner

Yzmblog commented Nov 19, 2024

Hi, the evaluation code is at https://github.com/Yzmblog/MonoHuman/blob/main/run.py#L193. First split the dataset, then you can try this command:

python run.py \
    --type eval \
    --cfg configs/monohuman/zju_mocap/387/387.yaml

Alternatively, you can generate the images first and then refer to the evaluation code to compute the metrics.
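If you go the second route, the core of the computation is just comparing each rendered image against its ground-truth frame with standard image metrics. Below is a minimal, self-contained sketch of PSNR, one of the usual metrics for this kind of evaluation. It uses plain NumPy and hypothetical array inputs; it is not MonoHuman's own metric code (see `run.py` for that), just an illustration of what "calculate the metrics" involves:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example with hypothetical images: a uniform 0.1 error on a [0, 1]
# image gives MSE = 0.01, so PSNR = 10 * log10(1 / 0.01) = 20 dB.
gt = np.zeros((4, 4, 3))
pred = gt + 0.1
print(round(psnr(pred, gt), 2))  # 20.0
```

In practice you would loop over the generated/ground-truth image pairs from the evaluation split and average the per-frame scores (and likewise for SSIM and LPIPS, which need their own implementations or libraries such as scikit-image and the `lpips` package).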

@TShoreWind
Author

Thank you for taking time out of your busy schedule to reply. I'm not sure what went wrong, but I get an error during evaluation after splitting the data. Could you share the detailed evaluation process?
[screenshot attached]
