Hi,

First of all, thank you very much for sharing the code. It really helped me understand the paper. Together with the full derivation of the likelihood in the paper, it makes for a wonderful learning resource for the Hawkes process and its application to social modeling.
However, by running your code I have noticed that the getTotalEvents function is highly sensitive to the precision of the values passed as the parameters K, beta, c, theta. For example, the parameters we found at the end of the optimization (see the Jupyter notebook), namely K=1.000000, beta=1.015493, c=250.657531, theta=1.338108, produce:
total = 216, nstar = 0.922294642616219, a1 = 13.4134242589397
By only keeping 3 decimal places, that is K=1.000, beta=1.015, c=250.657, theta=1.338, we get a terrible result:
total = 68, nstar = 0.468240281236122, a1 = 13.3638977367008
In other words, by rounding the numbers we get a jump in the relative error as high as 68.95%. Do you have an explanation for this? Is it a property of the objective function, or a numerical error in the code? Is there something that can be done to mitigate the problem? Could you please comment on that?
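For reference, here is a minimal sketch of how I recomputed n* and the expected total outside of getTotalEvents while investigating this. It only reflects my reading of the formulas in the paper: I am assuming alpha (the user-influence power-law exponent) is fixed, here to 2.016, and the event history below is made up, so please correct me if the actual implementation differs.

```python
# Minimal sketch (my reading of the paper's formulas, not the repo's code):
# marked power-law kernel phi_m(tau) = K * m^beta * (tau + c)^(-(1+theta)).

ALPHA = 2.016  # ASSUMPTION: fixed user-influence power-law exponent


def branching_factor(K, beta, c, theta, alpha=ALPHA):
    """n*: expected number of direct children per event (requires beta < alpha - 1)."""
    return K * (alpha - 1) / (alpha - 1 - beta) / (theta * c**theta)


def expected_total(history, T, K, beta, c, theta, alpha=ALPHA):
    """Expected final cascade size: n_observed + A1 / (1 - n*).

    `history` is a list of (t_i, m_i) pairs observed up to time T
    (hypothetical data below, not taken from the notebook)."""
    nstar = branching_factor(K, beta, c, theta, alpha)
    if nstar >= 1:
        raise ValueError("supercritical regime: n* >= 1, expected total diverges")
    # A1: expected number of direct children spawned after T by the observed events.
    a1 = sum(K * m**beta / (theta * (T + c - t)**theta) for t, m in history)
    return len(history) + a1 / (1 - nstar), nstar, a1


# Compare the fitted parameters with their 3-decimal versions.
history = [(0.0, 1000.0), (30.0, 150.0), (120.0, 80.0)]  # made-up events
T = 600.0
for K, beta, c, theta in [(1.000000, 1.015493, 250.657531, 1.338108),
                          (1.000,    1.015,    250.657,    1.338)]:
    total, nstar, a1 = expected_total(history, T, K, beta, c, theta)
    print(f"K={K}, beta={beta}, c={c}, theta={theta} -> "
          f"n*={nstar:.6f}, A1={a1:.4f}, total={total:.1f}")
```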
Thank you very much.
Best regards.