implementation details about textmodel_svm? #47
@kbenoit For instance, I see here https://github.com/cran/quanteda.textmodels/blob/a1c52468a8004e9c8a23b67eee9584677f2dab71/tests/testthat/test-textmodel_svm.R that you check that the coefficients should be equal to certain expected values.
How do you know those values? The IR example only deals with Naive Bayes. Thanks again!
Actually @kbenoit @koheiw, looking at the manual https://cran.r-project.org/web/packages/LiblineaR/LiblineaR.pdf, it seems the default type in LiblineaR is logistic regression rather than an SVM. Thanks again!
Yes, we realised this recently... see #45. Easily overridden through the arguments passed through to LiblineaR. Documentation is available in the references to LiblineaR.
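For concreteness, a minimal sketch of calling LiblineaR directly with an explicit `type`; the toy matrix and labels are made up and serve only to show that passing `type` overrides the default solver:

```r
library(LiblineaR)

# made-up toy feature matrix (4 documents x 3 features) and labels
x <- matrix(c(1, 0, 2,
              0, 1, 1,
              3, 0, 0,
              0, 2, 1), nrow = 4, byrow = TRUE)
y <- factor(c("pos", "neg", "pos", "neg"))

# LiblineaR's own default is type = 0 (L2-regularised logistic regression);
# passing type explicitly selects a different solver, e.g. an L2-loss SVC
fit_default <- LiblineaR(data = x, target = y)            # type = 0
fit_svc     <- LiblineaR(data = x, target = y, type = 1)  # L2-regularised L2-loss SVC (dual)

fit_svc$W  # fitted weights: one value per feature plus a bias term
```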
Thanks @kbenoit, I saw the docs, but I was curious to understand where you get the coefficients used in the test. Are these values computed in another textbook example, and are you simply verifying that the output matches them? Thanks!
I think they came from running the code outside of the quanteda structure, so we are verifying it against a non-quanteda version of the model fitted on the same data. Not a very strong test, but it does check whether something went amiss in our wrapper. Would be delighted for more critical tests or feedback, if you have any.
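A rough sketch of that kind of cross-check; the texts and labels are made up, the `type` passed to LiblineaR is an assumption about which solver the wrapper selects, and `coef()` is used here only as an assumed convenience accessor:

```r
library(quanteda)
library(quanteda.textmodels)
library(LiblineaR)

# made-up labelled texts, standing in for the data used in the test file
txt <- c(d1 = "good great fun", d2 = "bad awful boring",
         d3 = "great good",     d4 = "awful bad")
y   <- factor(c("pos", "neg", "pos", "neg"))
x   <- dfm(tokens(txt))

# the quanteda wrapper
tmod <- textmodel_svm(x, y = y)

# the "non-quanteda version": the same matrix fed straight to LiblineaR
# (type = 1 is an assumption, not the wrapper's documented default)
ref <- LiblineaR(data = as.matrix(x), target = y, type = 1)

coef(tmod)  # assumed accessor for the wrapper's fitted weights
ref$W       # weights from the direct LiblineaR fit, for comparison
```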
I am looking for some interesting docs. By the way, out of curiosity, do you know how the predicted probabilities are computed?
That's in the paper describing the method, but for multinomial logistic regression (of which the penalised approach is a special version), these are equivalent. The standard way is to compute this as per the last equation in https://en.wikipedia.org/wiki/Multinomial_logistic_regression#As_a_set_of_independent_binary_regressions.
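For anyone reading along, that last equation worked through numerically, with made-up coefficients (the reference class gets no coefficient vector of its own):

```r
# Pr(Y = k | x) = exp(b_k . x) / (1 + sum_j exp(b_j . x)) for k = 1..K-1,
# and Pr(Y = K | x) = 1 / (1 + sum_j exp(b_j . x)) for the reference class K
x <- c(1, 0.5, 2)                        # one observation (made up)
B <- rbind(classA = c( 0.3, -1.2, 0.4),  # K - 1 = 2 coefficient vectors (made up)
           classB = c(-0.5,  0.8, 0.1))

eta   <- drop(B %*% x)          # linear predictors for the non-reference classes
denom <- 1 + sum(exp(eta))
probs <- c(exp(eta) / denom, reference = 1 / denom)
probs        # class probabilities
sum(probs)   # == 1
```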
Hello there!
I hope all is well during these difficult times! I was playing with the great quanteda and discovered the nice `textmodel_svm` classification model. However, contrary to `textmodel_nb`, where there is a little example which reproduces Jurafsky's toy case, I cannot find anything about `textmodel_svm`.
Are any additional details available about this function (a quanteda tutorial, a toy example, etc.)? What is happening under the hood when using `textmodel_svm` with `dfm`s? Can we get back the coefficients for each token? Thanks!
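For reference, a rough sketch of such a toy example; everything here is made up, and `dfm_match()` is shown only as one way to align the test features with the training features:

```r
library(quanteda)
library(quanteda.textmodels)

# tiny made-up training set
train_txt <- c("good great fun", "bad awful boring", "great good", "awful bad")
train_y   <- factor(c("pos", "neg", "pos", "neg"))
train_dfm <- dfm(tokens(train_txt))

tmod <- textmodel_svm(train_dfm, y = train_y)

# new documents need to share the training features before prediction
test_dfm <- dfm(tokens(c("good fun", "so boring and bad")))
test_dfm <- dfm_match(test_dfm, features = featnames(train_dfm))

predict(tmod, newdata = test_dfm)
```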