Given that it is SENTENCE classification, you can't really "highlight" one part that makes a piece of text "toxic".
The only thing I can think of is to process each word in a submission individually to look for a "toxic" word (see the sketch below), but that is really inefficient, and it's not what the model is suited for: it doesn't just look at a single word or phrase.
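For concreteness, here is a minimal sketch of that per-word ablation idea. It assumes a `predict(text)` function that returns a dict of scores with a `"toxicity"` key; the function name and key are hypothetical, not this library's confirmed API.

```python
def attribute_by_ablation(text, predict, label="toxicity"):
    """Score the full text, then re-score it with each word removed.

    The drop in score when a word is removed is a rough measure of
    how much that word contributes to the label.
    """
    words = text.split()
    base = predict(text)[label]  # score for the full sentence
    contributions = []
    for i in range(len(words)):
        # Re-score the sentence with word i ablated.
        reduced = " ".join(words[:i] + words[i + 1:])
        drop = base - predict(reduced)[label]
        contributions.append((words[i], drop))
    return base, contributions
```

Note that for N words this costs N + 1 forward passes, which is exactly the inefficiency mentioned above, and since the model scores whole sentences, the per-word drops are only a heuristic.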
Hi,
Thanks for the work on this library, it's quite accurate!
It'd be awesome if the model could pinpoint the aspect of the input text that triggered a high score (for toxicity or any other measured field).
Is there any easy way to do it already, maybe not for all cases, but for the obvious ones?