Hi, congratulations on the amazing work... I understand that for large characters, having more than 9 strokes makes the factorial number of candidate parses huge... do you have any suggestions on how to handle large characters?
Hi Souank - thank you for the kind remarks and for your interest in our work.
The use of random search during inference (step 2 in Appendix B) was a fill-in, and there are a number of better alternatives for large characters. With an auto-regressive model like ours—where P(stroke1, stroke2, stroke3) = P(stroke1)P(stroke2 | stroke1)P(stroke3 | stroke1, stroke2)—one option is to use "greedy" search, where you select parses based on P(stroke1), then refine the selection based on P(stroke2 | stroke1), etc. Beyond greedy search, a better alternative would be to use MCMC or particle filtering so that you are keeping some of the lower-scoring candidates with nonzero probability for future evaluations. Hope this is helpful, but I'd be happy to clarify further.
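To make the greedy-search suggestion concrete, here is a minimal Python sketch (not from this repo) of searching over stroke orderings under an autoregressive score. The function `log_p_next` is a hypothetical stand-in for whatever conditional the model exposes, i.e. log P(stroke_k | stroke_1, ..., stroke_{k-1}); with `beam_width=1` the search is pure greedy, while a larger beam keeps some lower-scoring partial orderings alive, in the spirit of particle filtering.

```python
def order_strokes(strokes, log_p_next, beam_width=1):
    """Greedy/beam search over stroke orderings under an autoregressive score.

    strokes     -- list of stroke objects (any items the scorer accepts)
    log_p_next  -- callable(prefix, stroke) -> log P(stroke | prefix);
                   hypothetical hook into the model's conditional
    beam_width  -- 1 = pure greedy; >1 keeps lower-scoring candidates around
    """
    # each beam entry is (cumulative log score, ordering as a tuple of indices)
    beams = [(0.0, ())]
    n = len(strokes)
    for _ in range(n):
        candidates = []
        for score, order in beams:
            used = set(order)
            for i in range(n):
                if i in used:
                    continue
                prefix = [strokes[j] for j in order]
                new_score = score + log_p_next(prefix, strokes[i])
                candidates.append((new_score, order + (i,)))
        # keep only the top-scoring partial orderings
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    best_score, best_order = beams[0]
    return [strokes[i] for i in best_order], best_score
```

Cost is roughly O(K^2 * beam_width) conditional evaluations for K strokes, versus K! for exhaustive enumeration, which is why it stays feasible for characters with many strokes.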