diff --git a/CHANGELOG.md b/CHANGELOG.md
index 408738f..dd8133c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,7 @@
 
 ## latest
 
+- Add information about adaptivity tuning parameters https://github.com/precice/micro-manager/pull/131
 - Put computation of counting active steps inside the adaptivity variant `if` condition https://github.com/precice/micro-manager/pull/130
 
 ## v0.5.0
diff --git a/docs/configuration.md b/docs/configuration.md
index e7fe0b7..aedbed1 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -116,11 +116,17 @@
 Parameter | Description
 `type` | Set to either `local` or `global`. The type of adaptivity matters when the Micro Manager is run in parallel. `local` means comparing micro simulations within a local partitioned domain for similarity. `global` means comparing micro simulations from all partitions, so over the entire domain.
 `data` | List of names of data which are to be used to calculate if micro-simulations are similar or not. For example `["temperature", "porosity"]`.
 `history_param` | History parameter $$ \Lambda $$, set as $$ \Lambda >= 0 $$.
-`coarsening_constant` | Coarsening constant $$ C_c $$, set as $$ C_c < 1 $$.
-`refining_constant` | Refining constant $$ C_r $$, set as $$ C_r >= 0 $$.
+`coarsening_constant` | Coarsening constant $$ C_c $$, set as $$ 0 <= C_c < 1 $$.
+`refining_constant` | Refining constant $$ C_r $$, set as $$ 0 <= C_r < 1 $$.
 `every_implicit_iteration` | If True, adaptivity is calculated in every implicit iteration. If False, adaptivity is calculated once at the start of the time window and then reused in every implicit time iteration.
 `similarity_measure`| Similarity measure to be used for adaptivity. Can be either `L1`, `L2`, `L1rel` or `L2rel`. By default, `L1` is used. The `rel` variants calculate the respective relative norms. This parameter is *optional*.
+The primary tuning parameters for adaptivity are the history parameter $$ \Lambda $$, the coarsening constant $$ C_c $$, and the refining constant $$ C_r $$. Their effects can be interpreted as follows:
+
+- Higher values of the history parameter $$ \Lambda $$ imply that the similarity measures from the previous timestep have less influence on the similarity measure, and thus on the adaptivity state, in the current timestep.
+- Higher values of the coarsening constant $$ C_c $$ imply that more active simulations from the previous timestep will remain active in the current timestep.
+- Higher values of the refining constant $$ C_r $$ imply that fewer inactive simulations from the previous timestep will become active in the current timestep.
+
 Example of adaptivity configuration is
 
 ```json