reproducing retweet results #49
Hi, we have fixed quite a few issues recently in the main branch. We will rerun the experiments and get back to you shortly.
Hey, any update? Thanks.
Hi, I have uploaded a script to run the training on Retweet: https://github.com/ant-research/EasyTemporalPointProcess/blob/main/examples/train_experiment/run_retweet.py The problem here is that the time prediction becomes even worse. The dataset is in fact problematic because of its dt distribution: it contains some concurrent events while the max_dt is extremely large. We are working on further improvements to both the code and the data.
A more detailed analysis can be done by following the notebook https://github.com/ant-research/EasyTemporalPointProcess/blob/main/notebooks/easytpp_1_dataset.ipynb
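As a quick illustration of the dt issue mentioned above, something along these lines can be used to inspect the inter-event times; the `seqs` below is only a toy stand-in for the actual Retweet sequences loaded as in the notebook:

```python
import numpy as np

# Stand-in for the real Retweet sequences: each inner list is one sequence
# of event timestamps. Replace it with the data loaded as in
# easytpp_1_dataset.ipynb.
seqs = [
    [0.0, 0.0, 191.0, 233.0, 842.0],
    [0.0, 1837.0, 1845.0, 2905.0],
]

# Inter-event times (dt) pooled across all sequences.
dts = np.concatenate([np.diff(np.asarray(s)) for s in seqs])

print("num dts          :", dts.size)
print("share of dt == 0 :", np.mean(dts == 0))  # concurrent events
print("median dt        :", np.median(dts))
print("max dt           :", dts.max())          # extremely large on Retweet
```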
Hi, we have been trying to reproduce the paper's results with the THP and NHP models. The predictions we get are quite far from the ground-truth labels, specifically for the next event time:
labels:
```
array([[191.000000, 42.000000, 609.000000, ..., 718.000000, 11668.000000,
        8999.000000],
       [1837.000000, 8.000000, 1060.000000, ..., 1087.000000,
        6687.000000, 2488.000000],
       [1239.000000, 5326.000000, 1958.000000, ..., 59971.000000,
        32630.000000, 38632.000000],
       ...,
       [1.000000, 14.000000, 63.000000, ..., 457.000000, 252.000000,
        1397.000000],
       [313.000000, 4136.000000, 4994.000000, ..., 141400.000000,
        25386.000000, 65401.000000],
       [519.000000, 590.000000, 405.000000, ..., 62900.000000,
        28759.000000, 67504.000000]])
```
We get:
NHP predicted:
```
array([[191.000000, 0.842482, 0.987651, ..., 0.785294, 0.987130,
        0.969165],
       [1837.000000, 0.116420, 0.131135, ..., 0.044110, 0.146753,
        0.044110],
       [1239.000000, 0.171356, 0.993292, ..., 0.129321, 0.171356,
        0.171356],
       ...,
       [1.000000, 0.024143, 0.032704, ..., 0.502413, 0.028608, 0.028608],
       [313.000000, 0.956142, 0.142735, ..., 0.933639, 0.956142,
        0.142735],
       [519.000000, 0.655705, 0.117873, ..., 0.132342, 0.116001,
        0.116318]])
```
THP predicted:
```
array([[191.000000, 5.000000, 5.000000, ..., 5.000000, 5.000000,
        5.000000],
       [1837.000000, 5.000000, 5.000000, ..., 5.000000, 5.000000,
        5.000000],
       [1239.000000, 5.000000, 5.000000, ..., 5.000000, 5.000000,
        5.000000],
       ...,
       [1.000000, 5.000000, 5.000000, ..., 5.000000, 5.000000, 5.000000],
       [313.000000, 5.000000, 5.000000, ..., 5.000000, 5.000000,
        5.000000],
       [519.000000, 5.000000, 5.000000, ..., 5.000000, 5.000000,
        5.000000]])
```
We are using the example config. Changing shuffle to true, increasing the model hidden size, and changing the learning rate were not particularly helpful, and increasing max_dtime led to THP predicting max_dtime at every time stamp. We are wondering whether this is a config issue and whether there is a specific config you use for retweet with these models that you could share? We would like to verify the retweet results on all of the provided models.
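For reference, we launch the runs roughly as sketched below, following the entry point shown in the EasyTPP README; the yaml path and experiment_id are placeholders for the example config we are using:

```python
# Sketch of how we launch training/evaluation; the yaml path and
# experiment_id below are placeholders, not the exact files we use.
from easy_tpp.config_factory import Config
from easy_tpp.runner import Runner

config = Config.build_from_yaml_file(
    'configs/experiment_config.yaml',  # example config with the retweet dataset section
    experiment_id='THP_train',         # or 'NHP_train'
)
runner = Runner.build_from_config(config)
runner.run()
```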
We are also wondering how you tackle predicting long-horizon events for datasets with highly variable sequence lengths such as retweet, and how the extra padding at prediction time affects the predicted results.
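To make the padding part of the question concrete, here is a rough sketch of the comparison we have in mind: padded positions are masked out of the error, and a constant predictor (e.g. max_dtime, which is what THP seems to collapse to, or the mean dt) serves as a sanity baseline. The numbers below are toy values, not our actual outputs:

```python
import numpy as np

# Toy ground-truth dts and a padding mask (True = real event, False = padding);
# these numbers are made up purely for illustration.
labels = np.array([[191., 42., 609., 0., 0.],
                   [1837., 8., 1060., 1087., 0.]])
mask   = np.array([[1, 1, 1, 0, 0],
                   [1, 1, 1, 1, 0]], dtype=bool)

def masked_rmse(pred, target, mask):
    """RMSE computed only over non-padded positions."""
    err = (pred - target)[mask]
    return np.sqrt(np.mean(err ** 2))

# Constant predictors: always max_dtime (here 5.0) versus always the mean dt
# of the observed (non-padded) events.
const_pred = np.full_like(labels, 5.0)
mean_pred  = np.full_like(labels, labels[mask].mean())

print("RMSE, constant 5.0     :", masked_rmse(const_pred, labels, mask))
print("RMSE, constant mean dt :", masked_rmse(mean_pred, labels, mask))
```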