Toxic comments are becoming a major obstacle to meaningful online discussion.
This plugin uses a pre-trained toxicity classifier from TensorFlow.js to detect whether a comment is toxic. See more technical details on the quality of the model here.
Once a comment is flagged as toxic, it is blocked and the plugin alerts the author, asking them to modify the text before submitting again.
On the default Settings → Discussion page you can enable the detection of toxic comments and set the confidence threshold for the prediction.
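The flag-and-block flow above can be sketched in JavaScript. The prediction shape mirrors what the TensorFlow.js toxicity model returns from `model.classify()`: each prediction carries a `label` and `results[0].match`, which is `true` when the model's confidence exceeds the threshold passed to `toxicity.load(threshold)`. The helper name `isToxic` and the sample data are illustrative, not part of the plugin.

```javascript
// Pure helper: true if any toxicity label matched above the
// threshold configured on Settings → Discussion.
function isToxic(predictions) {
  return predictions.some((p) => p.results[0].match === true);
}

// Example of the prediction shape returned by model.classify([commentText]):
const predictions = [
  { label: 'identity_attack', results: [{ match: false }] },
  { label: 'insult', results: [{ match: true }] },
];

console.log(isToxic(predictions)); // true → the comment would be blocked
```

In the browser, the plugin would obtain such predictions by loading the model with the configured threshold (`toxicity.load(threshold)`) and passing the comment text to `model.classify()` before deciding whether to block the submission.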
Configuration settings for the plugin
Example of a blocked comment
How did you train the predictive model?
We didn’t. We are using a pre-trained model provided by TensorFlow itself.
Can I improve or personalize the prediction by manually training the toxicity classifier on my site?
No. The classifier is pre-trained. However, you could build your own classifier based on the [code used to create and train this one](https://github.com/conversationai/conversationai-models/tree/master/experiments).
The plugin relies on TensorFlow.js to analyze the comment in the browser. It therefore enqueues TensorFlow.js, the sentence encoder, and the toxicity model.
Nevertheless, the JS code that performs the actual comment classification is only added to single post pages where comments are open and toxicity detection is enabled.
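That loading condition can be sketched as a small predicate. The function and field names here are hypothetical, chosen only to mirror the checks described above; in the plugin itself this decision would happen server-side when scripts are enqueued.

```javascript
// Hypothetical predicate mirroring when the classification script is loaded:
// only on single post pages, with comments open, and with toxicity
// detection enabled in Settings → Discussion.
function shouldEnqueueClassifier(page) {
  return Boolean(page.isSingle && page.commentsOpen && page.toxicityEnabled);
}

console.log(shouldEnqueueClassifier({ isSingle: true, commentsOpen: true, toxicityEnabled: true }));  // true
console.log(shouldEnqueueClassifier({ isSingle: false, commentsOpen: true, toxicityEnabled: true })); // false
```

Restricting the heavyweight model scripts to pages that can actually receive comments keeps every other page load unaffected.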
Install and Activate the plugin through the ‘Plugins’ menu in WordPress
- Initial release