Training likelihood decreasing with iterations #3

Open

david-cortes opened this issue Aug 7, 2018 · 0 comments

If I pass a validation file that is a copy of the training file and look at the numbers in validation.txt, I see that at some point the log-likelihood starts decreasing (moving further away from zero) rather than increasing.

I’m not an expert in variational inference, but if the model is doing full-batch updates in which each parameter is set to its expected value given the other variables, shouldn’t the training likelihood be monotonically non-decreasing in the number of iterations?
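For reference, the guarantee I have in mind is the standard coordinate-ascent variational inference (CAVI) one. A sketch, under my assumption that batch collabtm is plain CAVI on the evidence lower bound (ELBO):

% ELBO for observations x and latent variables z under the variational distribution q
\mathcal{L}(q) = \mathbb{E}_{q}\left[\log p(x, z)\right] - \mathbb{E}_{q}\left[\log q(z)\right]

% Optimal update for one factor q_j, with all other factors held fixed
q_j^{*}(z_j) \propto \exp\left( \mathbb{E}_{q_{-j}}\left[\log p(x, z)\right] \right)

% Each such update maximizes \mathcal{L} over q_j alone, so a full sweep satisfies
\mathcal{L}\left(q^{(t+1)}\right) \ge \mathcal{L}\left(q^{(t)}\right)

Since each coordinate update can only raise \mathcal{L}, the ELBO should be non-decreasing across sweeps; whether the number written to validation.txt is the ELBO or a separately computed likelihood is part of what I’m unsure about.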

The dataset is available at this link:
https://drive.google.com/open?id=1FzBzQnGU3bQ3ojLIGy9Hby9A6Tcun-JQ

Called with the following parameters:
collabtm -dir path_to_data -nusers 191770 -ndocs 119448 -nvocab 342260 -k 100

The log-likelihood shows a decrease at iterations 60 and 70, after which the run stops.

0	121	-14.392807046	434084
10	1331	-13.920642836	434084
20	2543	-12.258906021	434084
30	3767	-12.187407095	434084
40	4989	-12.173852715	434084
50	6210	-12.170551069	434084
60	7458	-12.172230009	434084
70	8680	-12.180070428	434084
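(My reading of the columns, guessed from the output format rather than the docs: iteration, elapsed time in seconds, validation log-likelihood, and number of validation entries.)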