
To do #87

Open
3 of 11 tasks
kathsherratt opened this issue Jan 18, 2021 · 1 comment
kathsherratt (Contributor) commented Jan 18, 2021

@seabbs put together a to-do list; I'm adding a copy here for reference and easier access.

It's a direct copy of the text, but I have reorganised it into chunks.

Maintain week-to-week code

  • Review the current implementation and bug/sense check it (e.g. for things like: should the US forecast equal sum(states)? — see the sketch after this list).
  • Add any additional automated checks that make sense to prevent PA-like issues from occurring (ideally identifying what the original issue in PA was).
  • Automate submission to the US hub (leaving only a graph check as manual work).
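
A minimal sketch of the kind of automated sense check meant in the first bullet, assuming a hub-style long-format table with `location`, `target_end_date`, `quantile` and `value` columns. The column names, the use of medians, and the 10% tolerance are assumptions for illustration, not the repository's actual check:

```python
import pandas as pd

def check_national_vs_states(forecast: pd.DataFrame, rel_tol: float = 0.1) -> pd.DataFrame:
    """Flag target dates where the US median forecast differs from the sum of
    the state medians by more than rel_tol (relative difference).

    Note: the sum of state medians is not exactly the median of the sum, so a
    tolerance check is used rather than strict equality.
    """
    medians = forecast[forecast["quantile"] == 0.5]
    us = medians[medians["location"] == "US"].set_index("target_end_date")["value"]
    states = medians[medians["location"] != "US"].groupby("target_end_date")["value"].sum()
    comparison = pd.DataFrame({"us": us, "state_sum": states}).dropna()
    comparison["rel_diff"] = (comparison["us"] - comparison["state_sum"]).abs() / comparison["state_sum"]
    return comparison[comparison["rel_diff"] > rel_tol]

# Usage (hypothetical submission file):
# flagged = check_national_vs_states(pd.read_csv("submission.csv"))
# assert flagged.empty, f"US forecast deviates from sum of states:\n{flagged}"
```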

Add new features

  • Potentially add a true NULL model into the mix (see Individual model checks & null model #86).
  • Extend the ensembling grid to be by horizon and by state (and ideally both), and fix the stability issue that makes the findings hard to use.
  • Score the individual models, plus the mean, median and a chosen QRA ensemble (see below) and the hub ensemble. Scoring should be both exhaustive (i.e. @nikosbosse's summary report) and summarised into a few figures for a paper.
  • Score the ensemble grid and explore which combinations do and don't work. For models that work well (if any), highlight the weightings of the individual models and potentially explore them over time. Again, exhaustive scoring plus a few summary figures (a toy sketch of quantile-level ensembling follows this list).
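
For reference, a toy sketch of unweighted quantile-level ensembling by state and horizon (the mean/median case above); QRA would additionally fit per-model weights against past scores. The column names and grouping are assumptions based on the hub format, not the repository's actual ensembling code:

```python
import pandas as pd

def quantile_ensemble(forecasts: pd.DataFrame, method: str = "mean") -> pd.DataFrame:
    """Average each predictive quantile across models, separately within every
    state / horizon / date / quantile cell."""
    group_cols = ["location", "horizon", "target_end_date", "quantile"]
    agg = "median" if method == "median" else "mean"
    return forecasts.groupby(group_cols, as_index=False)["value"].agg(agg)

# Usage: `forecasts` stacks all individual models' quantile forecasts (one row
# per model/location/horizon/date/quantile); the ensemble grid is then the
# cross of method x grouping, e.g.
# mean_ens = quantile_ensemble(forecasts, "mean")
# median_ens = quantile_ensemble(forecasts, "median")
```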

Paper framework

  • Introduction: background, detail, what we are trying to do (my take: simple single-epi concepts plus ensembling, versus building a big, hard-to-interpret model).
  • Methods: data, a section per model, an ensembling section, a scoring section, evaluation.
  • Results: the individual model evaluation above, the ensemble evaluation above, and a deep dive into areas/times/types of poor performance.
  • Discussion: set up sections and bullet points.
@nikosbosse (Contributor) commented:

Added a NULL model (with increasing uncertainty). Not sure it is the best we can do, but it's honest work :) See #88.
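
For context, a toy sketch of what a null model "with increasing uncertainty" can look like: carry the last observed value forward and widen the predictive quantiles with the horizon. This only illustrates the idea (the log-normal form and 10% weekly standard deviation are assumptions); it is not the implementation in #88:

```python
import numpy as np
from scipy.stats import norm

def null_forecast(last_value, horizons=(1, 2, 3, 4),
                  quantiles=(0.05, 0.25, 0.5, 0.75, 0.95),
                  sd_per_week=0.1):
    """Persist the last observation; predictive quantiles come from a
    log-normal whose standard deviation grows with sqrt(horizon)."""
    out = {}
    for h in horizons:
        sd = sd_per_week * np.sqrt(h)  # uncertainty widens with horizon
        out[h] = {q: last_value * np.exp(norm.ppf(q) * sd) for q in quantiles}
    return out

# e.g. null_forecast(1000)[4][0.95] gives the (wide) upper bound four weeks ahead
```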
