Include summary of the modeling outputs into the automation workflow #47
Comments
The rules don't only perform input validation; they also summarize simulation results. The generation_comparison and installed_capacity rules are triggered after a model run and use its results to generate the plots. Currently we are running only baseline models, so it looks like input validation, but if we expand this to scenario runs, the same rules will be plotting the summary. Could you please elaborate, in case I am missing the point, on what summary you would want to see?
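For reference, a minimal sketch of how such post-run plotting rules are typically wired in a Snakemake workflow; only the rule names come from the existing workflow, while the file paths and script names below are assumptions for illustration:

```python
# Sketch only: post-run plotting rules keyed on the solved network.
# Paths and script names are placeholders, not the actual workflow layout.
rule generation_comparison:
    input:
        network="results/{scenario}/solved_network.nc",
    output:
        plot="results/{scenario}/plots/generation_comparison.png",
    script:
        "scripts/plot_generation_comparison.py"

rule installed_capacity:
    input:
        network="results/{scenario}/solved_network.nc",
    output:
        plot="results/{scenario}/plots/installed_capacity.png",
    script:
        "scripts/plot_installed_capacity.py"
```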
Hey @SermishaNarayana, thanks for the input! It's great to have a clear description of your doubts, as this is quite a crucial point for understanding the goal of the whole modeling workflow. The current procedure is focused on reproducing the initial results, and it does so very nicely. It will definitely be handy for preparing deliverables, and it can also serve as a kind of regression test. But it wouldn't be a good idea to use this functionality to track the modeling outputs of full-scale runs, given the limitations that come with the high specialisation of #45. The major points for built-in output tracking:
- Currently, the workflow automation #45 is focused on input validation.
- To facilitate status tracking for modeling runs, it's also crucial to include a summary of the simulation results (a rough sketch of such a summary rule is given after this list).
- It would also be nice to have a comparison with the modeling data.
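As a starting point, here is a rough sketch of what a built-in summary rule for status tracking could look like. It assumes per-scenario statistics CSVs are already written by the workflow; the paths, the `scenarios` config key, and the aggregation itself are hypothetical and would need to be adapted to the actual outputs:

```python
# Hypothetical summary rule: collect per-scenario statistics into one table
# so a run can be compared against earlier runs or reference modeling data.
rule summarize_results:
    input:
        stats=expand("results/{scenario}/statistics.csv", scenario=config["scenarios"]),
    output:
        summary="results/summary/run_summary.csv",
    run:
        import pandas as pd

        frames = [
            pd.read_csv(path).assign(scenario=scen)
            for path, scen in zip(input.stats, config["scenarios"])
        ]
        pd.concat(frames).to_csv(output.summary, index=False)
```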