transform_pmf_model_out() broken by new scoringutils #56

Closed
zkamvar opened this issue Sep 25, 2024 · 0 comments · Fixed by #57

zkamvar commented Sep 25, 2024

I just created a repo that will run weekly integration tests, and it found four failing tests in hubEvals:

 ══ Failed tests ════════════════════════════════════════════════════════════════
  ── Error ('test-score_model_out.R:329:3'): score_model_out succeeds with valid inputs: nominal pmf output_type, default metrics, custom by ──
  Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", 
      task_id_cols), observed = "observation", predicted = "value", 
      model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
  Backtrace:
   1. └─hubEvals::score_model_out(...) at test-score_model_out.R:329:3
   2.   └─hubEvals:::transform_pmf_model_out(...) at hubEvals/R/score_model_out.R:96:3
  ── Error ('test-transform_pmf_model_out.R:16:3'): transform_pmf_model_out succeeds with valid inputs ──
  Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", 
      task_id_cols), observed = "observation", predicted = "value", 
      model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
  Backtrace:
   1. └─hubEvals:::transform_pmf_model_out(...) at test-transform_pmf_model_out.R:16:3
  ── Error ('test-transform_pmf_model_out.R:46:3'): transform_pmf_model_out doesn't depend on specific column names for task id variables ──
  Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", 
      task_id_cols), observed = "observation", predicted = "value", 
      model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
  Backtrace:
   1. └─hubEvals:::transform_pmf_model_out(...) at test-transform_pmf_model_out.R:46:3
  ── Error ('test-transform_pmf_model_out.R:59:3'): transform_pmf_model_out throws an error if model_out_tbl has no rows ──
  Error in `scoringutils::as_forecast_nominal(data, forecast_unit = c("model", 
      task_id_cols), observed = "observation", predicted = "value", 
      model = "model", predicted_label = "output_type_id")`: unused argument (model = "model")
  Backtrace:
   1. ├─testthat::expect_error(...) at test-transform_pmf_model_out.R:59:3
   2. │ └─testthat:::expect_condition_matching(...)
   3. │   └─testthat:::quasi_capture(...)
   4. │     ├─testthat (local) .capture(...)
   5. │     │ └─base::withCallingHandlers(...)
   6. │     └─rlang::eval_bare(quo_get_expr(.quo), quo_get_env(.quo))
   7. ├─base::suppressWarnings(...)
   8. │ └─base::withCallingHandlers(...)
   9. └─hubEvals:::transform_pmf_model_out(...)
  
  [ FAIL 4 | WARN 0 | SKIP 0 | PASS 64 ]

Source

The root cause is epiforecasts/scoringutils#915, which removed the model parameter from as_forecast_nominal(). transform_pmf_model_out() still passes model = "model", so the call now fails with an "unused argument" error.
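
For context, here is a minimal sketch of what the updated call inside transform_pmf_model_out() might look like, assuming the only change needed is to stop passing the removed argument (the model column is already listed in forecast_unit, so scoringutils can pick it up from the forecast unit). data and task_id_cols stand for the same objects shown in the internal call in the backtraces above:

  # Sketch only: the same call as in the error above, minus the removed
  # `model` argument. In current scoringutils the model column is treated as
  # part of the forecast unit rather than named via a dedicated argument.
  scoringutils::as_forecast_nominal(
    data,
    forecast_unit   = c("model", task_id_cols),
    observed        = "observation",
    predicted       = "value",
    predicted_label = "output_type_id"
  )

If that is all the fix needs to change, the four failures above should go away without touching the test expectations themselves.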
