
Include a summary of the modeling outputs in the automation workflow #47

Open
ekatef opened this issue Jan 24, 2025 · 2 comments

@ekatef
Contributor

ekatef commented Jan 24, 2025

Currently, the workflow automation #45 is focused on input validation.

To facilitate status tracking for modeling runs, it is also crucial to include a summary of the simulation results.

A comparison with the modeling data would also be a nice feature.

@SermishaNarayana
Collaborator

SermishaNarayana commented Jan 27, 2025

@ekatef

The rules don't only perform input validation; they also summarise the simulation results. The generation_comparison and installed_capacity rules are triggered after a model run and use its results to generate the plots.

Currently we are only running baseline models, so it appears to be input validation. But if we expand this to scenario runs, the same rules will plot the summary.
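
For reference, a minimal sketch of how such a post-run rule is typically wired up in a Snakemake workflow (the file paths and script name below are illustrative placeholders, not the actual entries of #45):

```python
# Illustrative Snakemake rule: runs after the solve step and summarises results.
# All paths and the script name are placeholders for this sketch.
rule generation_comparison:
    input:
        network="results/{scenario}/networks/solved.nc",
        reference="data/reference_statistics.csv",
    output:
        plot="results/{scenario}/plots/generation_comparison.png",
    script:
        "scripts/plot_generation_comparison.py"
```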

Can you please elaborate, in case I am missing the point, on what summary you would want to see?

@ekatef
Contributor Author

ekatef commented Jan 27, 2025

Hey @SermishaNarayana, thanks for the input! It's great to have a clear description of your doubts, as this is quite a crucial point for understanding the goal of the whole modeling workflow.

The current procedure is focused on reproducing the initial results, and does so in a very nice way. It will definitely be very handy for preparing deliverables, and can also be used as a kind of regression test. But it wouldn't be a good idea to use this functionality to track the modeling outputs of full-scale runs, due to limitations linked to the high specialisation of #45.

The major points for built-in output tracking:

  1. We need to examine the outputs of the cross-sectoral model instead of focusing on the power sector only, which implies:

    • respecting the structure of the cross-sectoral model, which differs from the power-only one (here is an example of how to calculate the generation and installed capacity for the cross-sectoral model; see also the sketch after this list);
    • adding sector-relevant metrics which are handy for numerical experiments.
  2. The functionality should include comparing against different modeling implementations instead of focusing on using the reference statistics. The goal is usually to understand the effect of certain modeling parameters on the results.

  3. Hard-coding must be avoided: e.g. in-code custom definitions such as this one can easily lead to confusion when used to debug a cross-sectoral model.
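
To make points 1 and 2 more concrete, here is a rough sketch (not a proposal for the actual implementation) of how a sector-aware summary could aggregate yearly energy per carrier over both generators and sector-coupling links, and compare two model runs against each other instead of against reference statistics; the file paths and the grouping by carrier are assumptions:

```python
import pandas as pd
import pypsa

def energy_by_carrier(n: pypsa.Network) -> pd.Series:
    """Yearly energy per carrier, counting generators and sector-coupling
    links (electrolysers, heat pumps, CHPs, ...), not the power sector only."""
    gen = (
        n.generators_t.p.multiply(n.snapshot_weightings.generators, axis=0)
        .sum()
        .groupby(n.generators.carrier)
        .sum()
    )
    # Links move energy between sectors; p0 is the flow at the input bus.
    link = (
        n.links_t.p0.multiply(n.snapshot_weightings.generators, axis=0)
        .sum()
        .groupby(n.links.carrier)
        .sum()
    )
    return pd.concat([gen, link]).groupby(level=0).sum()

# Compare two modeling runs with each other; the paths are placeholders.
runs = {
    "base": pypsa.Network("results/base/networks/solved.nc"),
    "variant": pypsa.Network("results/variant/networks/solved.nc"),
}
summary = pd.DataFrame({name: energy_by_carrier(n) for name, n in runs.items()})
summary["delta"] = summary["variant"] - summary["base"]
print(summary.sort_values("delta"))
```

Such per-carrier aggregations and paths would of course be read from the configuration rather than hard-coded (point 3).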
