
issues on dst #5

Open
smartyfh opened this issue Sep 10, 2020 · 7 comments


@smartyfh

Hi,

When evaluating the JGA for DST, did you remove both the `none` slots and the `dontcare` slots?

When I ran dialogue_generation.py, the generated belief states in the MODEL_OUTPUT file always seem to be empty. Could you please provide more details about how the model is trained for DST?

Thanks!

@ehosseiniasl
Contributor

Hi,

We are only removing `none`; this is fixed now.
As for dialogue_generation.py, this seems to be a parsing issue. We will fix it.

@fasterbuild

fasterbuild commented Sep 16, 2020

@smartyfh Regarding the empty belief issue: in the generate_dialogue.py file, add `text = text.strip()` before `tokenizer.encode(text)`; otherwise there is always a space at the end. However, I didn't get the joint accuracy reported in the paper. Did you?
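For reference, a minimal sketch of this fix, assuming the Hugging Face GPT-2 tokenizer that the generation script uses (the prompt string below is illustrative, not the exact repo code):

```python
# Illustrative sketch of the trailing-space fix (assumes the Hugging Face
# GPT-2 tokenizer; the prompt string below is hypothetical).
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Note the trailing space at the end of the prompt.
text = "<|endoftext|> <|context|> <|user|> i need a cheap hotel <|endofcontext|> "

# Without stripping, the trailing space is encoded into the input and can
# derail generation (e.g. the empty belief states seen in MODEL_OUTPUT).
text = text.strip()
input_ids = tokenizer.encode(text, return_tensors="pt")
```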

@smartyfh
Author

> @smartyfh Regarding the empty belief issue: in the generate_dialogue.py file, add `text = text.strip()` before `tokenizer.encode(text)`; otherwise there is always a space at the end. However, I didn't get the joint accuracy reported in the paper. Did you?

@fasterbuild Have you checked all the checkpoints or just one? If you ignore both the `none` and `dontcare` slots, the results should be reproducible. However, if you keep the `dontcare` slots, the accuracy would drop by several points, but this needs the authors to confirm.
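For what it's worth, here is a minimal sketch of what ignoring those slots means for the JGA computation (a hypothetical helper, not the repo's actual evaluation code):

```python
def joint_goal_accuracy(predictions, gold_states, drop_values=("none", "dontcare")):
    """Fraction of turns whose predicted belief state matches gold exactly.

    Slots whose value is in drop_values are removed from both sides before
    comparison. Hypothetical helper; the repo's evaluation may differ.
    """
    correct = 0
    for pred, gold in zip(predictions, gold_states):
        pred = {slot: v for slot, v in pred.items() if v not in drop_values}
        gold = {slot: v for slot, v in gold.items() if v not in drop_values}
        correct += int(pred == gold)
    return correct / len(gold_states)

# Example: one turn where dontcare is dropped from both sides before comparison.
pred = [{"hotel-area": "dontcare", "hotel-price": "cheap"}]
gold = [{"hotel-price": "cheap"}]
print(joint_goal_accuracy(pred, gold))  # 1.0
```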

@gungui98

@smartyfh I am struggling to reproduce the results. Could you please share the hyperparameters you used for training?

@ShaneTian

> @smartyfh Regarding the empty belief issue: in the generate_dialogue.py file, add `text = text.strip()` before `tokenizer.encode(text)`; otherwise there is always a space at the end. However, I didn't get the joint accuracy reported in the paper. Did you?

> @fasterbuild Have you checked all the checkpoints or just one? If you ignore both the `none` and `dontcare` slots, the results should be reproducible. However, if you keep the `dontcare` slots, the accuracy would drop by several points, but this needs the authors to confirm.

If you keep the `dontcare` slots, what JGA do you get?

@HuangLK

HuangLK commented Nov 27, 2020

> @smartyfh Regarding the empty belief issue: in the generate_dialogue.py file, add `text = text.strip()` before `tokenizer.encode(text)`; otherwise there is always a space at the end. However, I didn't get the joint accuracy reported in the paper. Did you?

> @fasterbuild Have you checked all the checkpoints or just one? If you ignore both the `none` and `dontcare` slots, the results should be reproducible. However, if you keep the `dontcare` slots, the accuracy would drop by several points, but this needs the authors to confirm.

> If you keep the `dontcare` slots, what JGA do you get?

The JGA is 50.32% when keeping `dontcare`. After ignoring both `none` and `dontcare`, I can achieve 55.45% JGA.

@libing125

I got 50.46% joint accuracy, keeping `dontcare` and doing the default cleaning.
