
Weichert algorithm truncates summations too early #6340

Open
nackerley opened this issue Dec 7, 2020 · 3 comments
@nackerley (Contributor)

For catalogues with a narrow range of magnitudes (i.e. maximum observed magnitude less than one unit greater than minimum observed magnitude), Weichert.calculate() can give extremely poor fits. Attached is one example looking at catalogues from northern Ontario (NON), Canada, where in 2017 there were just 29 events between magnitude 2 and 3.

[Figure: magnitude_recurrence_NON-2017_openquake]

The problem is that the summations used in approximating the first and second derivatives of likelihood, dldb and d2ldb2, are truncated at the maximum magnitude in the catalogue (in this case just magnitude 3).

[Screenshots: the dldb and d2ldb2 summations in Weichert.calculate()]
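To make the role of the truncation magnitude concrete, here is a minimal sketch of the Newton iteration for beta described in Weichert (1980). This is an illustrative reimplementation, not the OpenQuake code: the function name `weichert_beta` and the `mmax` parameter are my own, but the derivative expressions `dldb` and `d2ldb2` follow the paper. The truncation point enters simply as the magnitude up to which the binning is extended with empty bins:

```python
import numpy as np

def weichert_beta(mags, counts, durations, mmax, beta0=1.5, tol=1e-5, maxiter=100):
    """Newton iteration for the maximum-likelihood beta of Weichert (1980).

    mags: bin-centre magnitudes of the observed bins (uniform spacing)
    counts: event count in each bin
    durations: completeness duration of each bin (years)
    mmax: magnitude at which the summations are truncated (must exceed
          the largest observed bin centre) -- the quantity at issue here
    Illustrative sketch only, not the OpenQuake implementation.
    """
    dmag = mags[1] - mags[0]
    # Extend the binning with empty bins up to mmax, reusing the
    # completeness duration of the highest observed bin.
    ext = np.arange(mags[0], mmax, dmag)
    n = np.zeros_like(ext)
    n[:len(mags)] = counts
    t = np.full(len(ext), durations[-1], dtype=float)
    t[:len(durations)] = durations

    nobs = n.sum()
    snm = (n * ext).sum()  # sum of n_i * m_i over observed bins
    beta = beta0
    for _ in range(maxiter):
        tjexp = t * np.exp(-beta * ext)   # t_j * exp(-beta * m_j)
        sumtex = tjexp.sum()
        stmex = (ext * tjexp).sum()
        stm2x = (ext ** 2 * tjexp).sum()
        # First and second derivatives of the log-likelihood w.r.t. beta
        dldb = nobs * stmex / sumtex - snm
        d2ldb2 = nobs * ((stmex / sumtex) ** 2 - stm2x / sumtex)
        step = dldb / d2ldb2
        beta -= step
        if abs(step) < tol:
            break
    return beta
```

With a catalogue confined to magnitudes 2-3, raising `mmax` from 3 to a larger value changes the estimate, which is exactly the sensitivity reported above.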

The issue of where to truncate these sums is not discussed in Weichert (1980). Inspection of the FORTRAN we have here at the Canadian Hazards Information Service, which is descended from Dieter Weichert's code, shows that the sums are taken to the maximum magnitude assumed in other calculations for the source zone in question, in this case magnitude 7. Doing so gives much more reasonable results:

[Figure: magnitude_recurrence_NON-2017_betablast]

Ultimately, the choice of where to truncate these summations should be based not on geology, but on the rate of convergence of the sums involved in estimating dldb and d2ldb2 (which can be very slow for low b-values) together with the desired tolerance on beta. It turns out that for the NON-2017 dataset in question, summing to magnitude 8.4 would be "just enough" to achieve a tolerance of 1e-5 on beta:

[Figure: truncation_error_2017]
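The slow convergence at low beta is easy to demonstrate with a small self-contained experiment (the helpers `weighted_mean` and `required_upper` below are hypothetical, and constant completeness durations are assumed for simplicity): extend the upper limit of the weighted-mean sum, the quantity driving dldb, until it stabilises to the desired tolerance.

```python
import numpy as np

def weighted_mean(beta, m_min, m_upper, dm=0.1):
    """sum(m * exp(-beta*m)) / sum(exp(-beta*m)) over bin centres
    between m_min and m_upper: a stand-in for the terms in dldb,
    assuming constant completeness durations."""
    m = np.arange(m_min + dm / 2, m_upper, dm)
    w = np.exp(-beta * m)
    return (m * w).sum() / w.sum()

def required_upper(beta, tol=1e-5, m_min=2.0, dm=0.1, m_ref=40.0):
    """Smallest upper truncation magnitude at which the weighted mean
    is within tol of its (effectively) untruncated value, here
    approximated by summing to m_ref."""
    ref = weighted_mean(beta, m_min, m_ref, dm)
    m_upper = m_min + dm
    while abs(weighted_mean(beta, m_min, m_upper, dm) - ref) > tol:
        m_upper += dm
    return m_upper
```

Running `required_upper` for decreasing beta shows the required truncation magnitude growing rapidly, which is why a fixed cut-off at the maximum observed magnitude (or even at a geologically motivated maximum magnitude) cannot guarantee a given tolerance on beta.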

Truncating the summations at magnitude 8.4 results in a b-value estimate of 1.02453 instead of 1.02445. The difference is minuscule in this case, but it could become significant for catalogues with fewer events or a lower b-value.

Currently weichert_test.py only addresses the case where the earthquakes span a wide range of times and magnitudes. It should be possible to show that the current method fails for catalogues with a narrow range of magnitudes and/or a low beta, and to make those tests pass by increasing the magnitude at which the summations are truncated.

I am preparing a pull request to address this issue.

@micheles (Contributor)

After 9 months, is there any news on this issue?

@nackerley (Contributor, Author)

I do think this is an important issue to fix, and it sounds like there is interest, so I'll put this back on my to do list.

@micheles (Contributor) commented Sep 8, 2022

Still nothing after another year.

3 participants