This project implements and compares the performance of machine-learning models for identifying offensive text, in order to find the most suitable model to serve from a web application.
- Python
- Flask
- HTML and CSS
- JavaScript
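As a rough sketch of how the Flask layer might expose the chosen model to the front end (the `/classify` route and the `predict_offensive` helper are illustrative assumptions, not the project's actual code):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_offensive(text):
    # Placeholder for the trained classifier (e.g. the Naive Bayes or
    # LSTM model); a real implementation would load and call the model here.
    return "offensive" if "idiot" in text.lower() else "clean"

@app.route("/classify", methods=["POST"])
def classify():
    # Accept JSON like {"text": "..."} and return the predicted label.
    text = request.get_json().get("text", "")
    return jsonify({"text": text, "label": predict_offensive(text)})
```

The app could then be started with `flask run`, and the JavaScript front end would POST user input to `/classify` and display the returned label.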
- Naive Bayes Classification
  - Accuracy: 71%
  - Execution time: 0.01s
- Long Short-term Memory (LSTM)
  - Accuracy: 75%
  - Execution time: 0.28s
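To illustrate the Naive Bayes approach compared above, here is a minimal self-contained multinomial Naive Bayes with Laplace smoothing on toy data (the tiny corpus and labels below are made-up examples, not the project's training set):

```python
import math
from collections import Counter

# Toy labelled corpus; the real project would train on a much larger dataset.
train = [
    ("you are stupid", "offensive"),
    ("shut up idiot", "offensive"),
    ("have a nice day", "clean"),
    ("thank you so much", "clean"),
]

def fit(docs):
    """Count word frequencies per class for multinomial Naive Bayes."""
    word_counts = {}          # class -> Counter of word occurrences
    class_counts = Counter()  # class -> number of documents
    vocab = set()
    for text, label in docs:
        words = text.split()
        word_counts.setdefault(label, Counter()).update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Return the class with the highest log-posterior, using Laplace smoothing."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, -math.inf
    for label, counts in word_counts.items():
        score = math.log(class_counts[label] / total_docs)  # log prior
        denom = sum(counts.values()) + len(vocab)           # smoothed denominator
        for word in text.split():
            score += math.log((counts[word] + 1) / denom)   # log likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts, vocab = fit(train)
print(predict("you stupid idiot", word_counts, class_counts, vocab))  # → offensive
```

This kind of per-class word counting is what makes Naive Bayes so fast at prediction time, which is consistent with its much lower execution time than the LSTM in the results above.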
For a more detailed explanation of the project, you can read our research paper here