
Initialise patch state #304

Open · wants to merge 28 commits into develop
Conversation

@aornugent (Collaborator) commented Jun 11, 2021

Early progress on solving patches with arbitrary initial conditions. The included test shows how this has been added to run_scm_collect, which may or may not be the desired API design, but it is useful for now. It's also hard-coded for the single-species case, which will need to be updated.

Next steps will be adapting the default cohort introduction schedule to something smarter and then thinking about how to use build_schedule to see if we can arrive at the same solution as starting a patch from scratch.

@aornugent requested a review from dfalster on June 11, 2021
@aornugent (Collaborator, Author) commented Jun 28, 2021

This now works for multiple species with varying numbers of introduced cohorts, as shown in the test.

The required state data structure is a little weird, so it could perhaps be improved if we're only ever likely to initialise one variable (e.g. height).

I've also removed the env variable from make_scm so that the environment is initialised automatically. Some other tests suggest we sometimes want to recreate a patch exactly, so make_patch still has the option to pass in an env object. Both use the same underlying Patch.r_set_state function, but SCM.r_set_state always passes in an empty env vector.
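For illustration only, a minimal C++ sketch of that "shared setter, optional env" arrangement; apart from the idea of r_set_state, the class and method names here are invented and the real package classes are templated and differ:

// Sketch only: illustrates the design described above, not the package's
// actual Patch/SCM classes.
#include <iostream>
#include <vector>

class Patch {
public:
  // Shared entry point (standing in for r_set_state): callers may supply a
  // saved environment to recreate a patch exactly, or pass an empty vector
  // to have the environment rebuilt from the state.
  void set_state(const std::vector<double>& state,
                 const std::vector<double>& env) {
    state_ = state;
    env_ = env.empty() ? compute_environment(state) : env;
  }

  std::size_t env_size() const { return env_.size(); }

private:
  static std::vector<double> compute_environment(const std::vector<double>& state) {
    // Placeholder for constructing the light environment from cohort states.
    return std::vector<double>(state.size(), 0.0);
  }
  std::vector<double> state_, env_;
};

class SCM {
public:
  // The SCM path never carries an env vector; it always lets the patch
  // rebuild the environment from the supplied state.
  void set_state(const std::vector<double>& state) { patch_.set_state(state, {}); }
  const Patch& patch() const { return patch_; }

private:
  Patch patch_;
};

int main() {
  SCM scm;
  scm.set_state({10.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0});
  std::cout << "environment entries: " << scm.patch().env_size() << "\n";
}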

@aornugent (Collaborator, Author) commented Jul 5, 2021

Cohorts introduced at t = 0 are now included in build_schedule, but more care needs to be taken in split_state (perhaps a better name would be split_densities) to correctly resolve the initial size distribution. The code to regenerate these plots is on L136:

test_that("Build schedule works", {

Before: [plot image]

After: [plot image]

@aornugent marked this pull request as ready for review on July 5, 2021
@devmitch mentioned this pull request on Nov 15, 2021
@aornugent (Collaborator, Author) commented Jan 20, 2022

@devmitch - could you please take a look at this? As an example, this should initialise two new cohorts (n_init = 2) and set their state from a vector in Parameters. This is handled on the R side in build_schedule.

devtools::load_all()

env <- make_environment("FF16")
ctrl <- scm_base_control()
p0 <- scm_base_parameters("FF16")

p1 <- expand_parameters(trait_matrix(0.0825, "lma"), p0, FF16_hyperpar, FALSE)

# manual construction (see also: scm_state)
init <- matrix(c(10, 0, 0, 0, 0, 0, 1,
                 0.3920458, 0, 0, 0, 0, 0, -4), ncol = 2)

rownames(init) <- c("height", "mortality", "fecundity", "area_heartwood",
                    "mass_heartwood", "offspring_produced_survival_weighted",
                    "log_density")

# interpolate initial size distribution
splines <- init_spline(list(init), size_idx = 1)

res <- build_schedule(p1, env, ctrl, splines, n_init = 2)

On the C++ side, we can initialise a Patch and set the cohort states at time = 0, but the SCM crashes when taking the first step. The latest commit introduces some debugging statements that produce the output presented below.


Initialise patch

-- RESETTING PATCH --
Before update:
Initial states: 

Initial rates: 


-- SETTING INITIAL STATE --
Cohort states before compute_rates: 
10 0 0 0 0 0 1 
0.392046 0 0 0 0 0 -4 

Cohort rates before compute_rates: 
nan nan nan nan nan 0 0 
nan nan nan nan nan 0 0 

Cohort states after compute_rates: 
10 0 0 0 0 0 1 
0.392046 0 0 0 0 0 -4 

Cohort rates after compute_rates: 
0 2.98561e+14 0 0 0 0 -2.98561e+14 
0 1.18285e+11 0 0 0 0 -1.18285e+11 

Cohort states after compute_environent and compute_rates (again... ): 
10 0 0 0 0 0 1 
0.392046 0 0 0 0 0 -4 

Cohort rates after compute_environent and compute_rates (again...): 
0 9.09493e+13 0 0 0 0 -9.09493e+13 
0 1.18285e+11 0 0 0 0 -1.18285e+11 

These vectors represent two cohorts, each with seven variables. Cohort state is correctly initialised to (10, 0, 0, 0, 0, 0, 1) and (0.392046, 0, 0, 0, 0, 0, -4).

Cohort rates, however, are initialised as NaN and then explode after compute_rates. Strangely, this only affects fecundity and log_density, even though the fecundity state was set to zero. The rates of the taller cohort change slightly after updating the environment, reflecting the new state of these two cohorts. (Note: I have tested with N > 2; omitted for clarity.)


Start solver

This runs for the first few steps of the SCM solver but crashes shortly thereafter:

SCM TIME: 0
Initial cohorts already introduced

Set solver state from patch
Advancing using schedule times
Computing derivatives

STEPPING: 0
Derivs. at solver time: 2e-07
Cohort states before compute_rates: 
10 1.81899e+07 0 0 0 0 -1.81899e+07 
0.392046 23656.9 0 0 0 0 -23660.9 

Cohort rates before compute_rates: 
0 9.09493e+13 0 0 0 0 -9.09493e+13 
0 1.18285e+11 0 0 0 0 -1.18285e+11 

Cohort states after compute_rates: 
10 1.81899e+07 0 0 0 0 -1.81899e+07 
0.392046 23656.9 0 0 0 0 -23660.9 

Cohort rates after compute_rates: 
0.854499 0.0101241 6.02887e-05 0.000313229 1.68762 0 0.0541551 
0.521019 0.01 7.36104e-22 7.92362e-09 1.67368e-06 0 -0.900367 

Derivs. at solver time: 3e-07
Cohort states before compute_rates: 
10 6.8212e+06 1.3565e-11 7.04765e-11 3.79715e-07 0 -6.8212e+06 
0.392046 8871.35 1.65623e-28 1.78281e-15 3.76578e-13 0 -8875.35 

Cohort rates before compute_rates: 
0.854499 0.0101241 6.02887e-05 0.000313229 1.68762 0 0.0541551 
0.521019 0.01 7.36104e-22 7.92362e-09 1.67368e-06 0 -0.900367 

Cohort states after compute_rates: 
10 6.8212e+06 1.3565e-11 7.04765e-11 3.79715e-07 0 -6.8212e+06
 0.392046 8871.35 1.65623e-28 1.78281e-15 3.76578e-13 0 -8875.35 

Cohort rates after compute_rates: 
0.854499 0.0101241 6.02888e-05 0.000313229 1.68762 0 0.0541551 
0.521019 0.01 7.36105e-22 7.92362e-09 1.67368e-06 0 -0.900366 

Derivs. at solver time: 6e-07
Cohort states before compute_rates: 
10 2.72848e+07 1.80867e-11 9.39687e-11 5.06286e-07 0 -2.72848e+07 
0.392046 35485.4 2.20832e-28 2.37709e-15 5.02107e-13 0 -35489.4 

Cohort rates before compute_rates: 
0.854499 0.0101241 6.02888e-05 0.000313229 1.68762 0 0.0541551 
0.521019 0.01 7.36105e-22 7.92362e-09 1.67368e-06 0 -0.900366 

Cohort states after compute_rates: 
10 2.72848e+07 1.80867e-11 9.39687e-11 5.06286e-07 0 -2.72848e+07 
0.392046 35485.4 2.20832e-28 2.37709e-15 5.02107e-13 0 -35489.4 

Cohort rates after compute_rates: 
0.854499 0.0101241 6.02888e-05 0.000313229 1.68762 0 0.0541551 
0.521019 0.01 7.36105e-22 7.92363e-09 1.67368e-06 0 -0.900366 

Derivs. at solver time: 1e-06 ...

>  Error in SCM___FF16__FF16_Env__run_next(self) : 
  Detected non-finite contribution 

One avenue of investigation will be the initialisation of rates, but I'm very keen to hear what you think might be causing this issue.

@devmitch (Contributor) commented Jan 23, 2022

Cohort rates, however, are initialised as NaN

This can be fixed by changing NA_REAL to 0.0 on the following line:

rates.resize(new_size, NA_REAL);

Though this doesn't stop the explosion of the rates, so I think the two issues are unrelated. NA_REAL seems to be some constant defined in RcppCommon.h, though I'm not sure what its use is here.
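For context, NA_REAL is R's missing-value sentinel for doubles, which behaves numerically as a NaN, so any rate left at that value poisons whatever arithmetic touches it. A standalone illustration (plain C++, not the package's solver):

#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

int main() {
  // Rates resized with NaN (which is what NA_REAL amounts to numerically)
  // stay NaN through any arithmetic, so an Euler-style update contaminates
  // the state immediately.
  std::vector<double> rates(7, std::numeric_limits<double>::quiet_NaN());
  std::vector<double> state(7, 0.0);

  const double dt = 2e-7;
  for (std::size_t i = 0; i < state.size(); ++i) {
    state[i] += dt * rates[i];
  }
  std::cout << std::boolalpha
            << "NaN-filled rates -> state[0] finite? "
            << std::isfinite(state[0]) << "\n";  // false

  // Resizing with 0.0 instead leaves not-yet-computed rates harmless.
  std::fill(rates.begin(), rates.end(), 0.0);
  std::fill(state.begin(), state.end(), 0.0);
  for (std::size_t i = 0; i < state.size(); ++i) {
    state[i] += dt * rates[i];
  }
  std::cout << "zero-filled rates -> state[0] finite? "
            << std::isfinite(state[0]) << "\n";  // true
}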

As for the exploding rates,

Strangely, this only affects fecundity and log_density, even though fecundity state was set to zero

I believe the affected rates (initially) are mortality and log_density. I confirmed this with print statements in ff16_strategy.cpp:

vars.set_rate(MORTALITY_INDEX,
        mortality_dt(net_mass_production_dt_ / area_leaf_, vars.state(MORTALITY_INDEX)));
std::cout << "mortality = " << mortality_dt(net_mass_production_dt_ / area_leaf_, vars.state(MORTALITY_INDEX)) << "\n";

This would explain why log_density also blows up since it is a result of mortality:

log_density_dt =
  - growth_rate_gradient(environment)
  - plant.rate("mortality");

As growth_rate_gradient(environment) = 0, log_density is indeed just the negation of mortality, as seen in the print statements.
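In equation form (just restating the snippet above, with n the density along a characteristic, g the growth rate in height h, and d the mortality rate):

$$\frac{d}{dt}\log n = -\frac{\partial g}{\partial h} - d,$$

so with growth_rate_gradient(environment) = 0 the log-density rate is simply -d, and any blow-up in the mortality rate reappears with opposite sign in log_density, exactly the paired values 9.09493e+13 and -9.09493e+13 in the debug output.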

Mortality is set in this way outside of the if (net_mass_production_dt_ > 0) condition in ff16_strategy.cpp:

if (net_mass_production_dt_ > 0) {
  // ...
} else {
  // ...
}
vars.set_rate(MORTALITY_INDEX,
      mortality_dt(net_mass_production_dt_ / area_leaf_, vars.state(MORTALITY_INDEX)));

The other rates dependent on net_mass_production_dt_ > 0 are all computed inside this condition, and set to 0 if the condition isn't met. I'm not sure if this is correct or if mortality is different to the other rates, but intuitively it should be:

if (net_mass_production_dt_ > 0) {
  // ...
  vars.set_rate(MORTALITY_INDEX,
      mortality_dt(net_mass_production_dt_ / area_leaf_, vars.state(MORTALITY_INDEX)));
} else {
  // ...
  vars.set_rate(MORTALITY_INDEX, 0.0);
}

Running the script now, we get further than before but stop at a different error:

Cohort states after compute_rates:
10 0 0 0 0 0 1
0.392046 0 0 0 0 0 -4
0.392046 inf 0 0 0 0 -inf
(... the row "0.392046 inf 0 0 0 0 -inf" repeats for every remaining cohort ...)

Cohort rates after compute_rates:
0 0 0 0 0 0 0
(... all zeroes, repeated for every cohort ...)

FINISHED STEP

Solver finished: 104

Error in SCM___FF16__FF16_Env__net_reproduction_ratio_errors__get(self) : 
  Incorrect length input; expected 141, received 142

It still doesn't look correct with the infs in the states or the zeroes in the rates.

@aornugent (Collaborator, Author) commented

Well spotted, thanks @devmitch. There's even this helpful comment in FF16_Strategy::mortality_dt:

// NOTE: When plants are extremely inviable, the rate of change in
// mortality can be Inf, because net production is negative, leaf
// area is small and so we get exp(big number).  However, most of
// the time that happens we should get infinite mortality variable
// levels and the rate of change won't matter.  It is possible that
// we will need to trim this to some large finite value, but for
// now, just checking that the actual mortality rate is finite.
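If we did end up trimming to a large finite value, as that comment anticipates, a minimal sketch could look like the following; the helper name and the cap value are made up for illustration and are not part of the package:

#include <algorithm>
#include <cmath>
#include <iostream>

// Hypothetical guard: clamp a growth-dependent mortality rate to a large
// but finite ceiling so a single inviable cohort cannot derail the ODE step.
double clamp_mortality_rate(double rate, double cap = 1e10) {
  return std::isfinite(rate) ? std::min(rate, cap) : cap;
}

int main() {
  // Roughly the blow-up seen in the debug output above.
  const double raw = 5.5 * std::exp(-20.0 * -1.5218);
  std::cout << "raw: " << raw
            << ", clamped: " << clamp_mortality_rate(raw) << "\n";
}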

With the dummy input data (above), net_mass_production_dt / leaf_area = [-1.521827, -1.18958] for the two cohorts. This looks to be largely due to the high initial density of the first cohort: there are an unrealistically large number of 10 m individuals, so the first step of the model undergoes intense thinning.

In mortality_growth_dependent_dt we see this problem reflected in equation 20, with the parameter a_dG2 = 20 exacerbating the issue:

dG = 5.5 * exp(-20 * -1.5218) = 9.0897e+13
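Written out (the symbol a_dG1 for the 5.5 prefactor is my labelling of the corresponding FF16 parameter; a_dG2 = 20 as above, X the net mass production per unit leaf area):

$$d_G = a_{dG1}\, e^{-a_{dG2} X} = 5.5 \times e^{-20 \times (-1.5218)} \approx 9.09 \times 10^{13},$$

which matches the mortality rate printed for the taller cohort before the solver takes its first step.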

The FF16 model runs without any changes if I lower a_dG2 or the initial density of the first cohort. @dfalster - perhaps the best solution then is to catch the error and suggest to a user that their initial state is too far from plausible values?

Successfully merging this pull request may close these issues.

[pre-existing-plants] Ability to start patch with pre-existing plants