winkNLP's English lite language model for Web Browsers
This is a pre-trained English language model for the winkjs NLP package, winkNLP. It is compatible with browserify: you can easily create a bundle that can be served to the web browser in a single <script>
tag, or even build mobile apps. Its gzipped size is ~1MB.
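For example, once both packages are installed (see the installation steps below), a minimal bundling sketch using browserify's Node API could look like the following; the file names index.js and bundle.js are illustrative only:
// index.js: code destined for the browser.
const winkNLP = require('wink-nlp');
const model = require('wink-eng-lite-web-model');
const nlp = winkNLP(model);
document.body.textContent = nlp.readDoc('Hello World!').out();

// build.js: create the browser bundle using browserify's Node API.
const browserify = require('browserify');
const fs = require('fs');
browserify('./index.js')
  .bundle()
  .pipe(fs.createWriteStream('./bundle.js'));
The resulting bundle.js can then be loaded via a single <script> tag.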
It is an open-source language model, released under the MIT license.
It contains models for the following NLP tasks:
- Tokenization
- Token's Feature Extraction
- Sentence Boundary Detection
- Negation Handling
- POS tagging
- Automatic mapping of British spellings to American
- Named Entity Recognition
- Sentiment Analysis
- Custom Entities Definition
- Stemming using Porter Stemmer Algorithm V2
- Lemmatization
- Readability statistics computation
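As a quick illustration of a few of the tasks listed above, the sketch below prints stems, lemmas, and readability statistics; the sample sentence is only an example:
const winkNLP = require('wink-nlp');
const model = require('wink-eng-lite-web-model');
const nlp = winkNLP(model);
const its = nlp.its;

const doc = nlp.readDoc('The trains were running unusually late today.');
// Stems (Porter Stemmer V2) and lemmas for every token.
console.log(doc.tokens().out(its.stem));
console.log(doc.tokens().out(its.lemma));
// Readability statistics for the whole document.
console.log(doc.out(its.readabilityStats));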
It is a derivative of wink-eng-lite-model and also supports TypeScript.
It requires Node.js version 16.0.0
or above. The compatible browsers are listed here.
The model must be installed along with wink-nlp:
# Install wink-nlp
npm install wink-nlp --save
# Install wink-eng-lite-web-model
npm install wink-eng-lite-web-model --save
We start by requiring the wink-nlp package and the wink-eng-lite-web-model. Then we instantiate wink-nlp using the language model:
// Load "wink-nlp" package.
const winkNLP = require('wink-nlp');
// Load the English lite language model.
const model = require('wink-eng-lite-web-model');
// Instantiate wink-nlp.
const nlp = winkNLP(model);
// Code for Hello World!
const text = 'Hello World!';
const doc = nlp.readDoc(text);
console.log(doc.out());
// -> Hello World!
Learn how to use this model with winkNLP from the following resources:
- Overview — introduction to winkNLP.
- Concepts — everything you need to know to get started.
- API Reference — explains usage of APIs with examples.
The model supports the following NLP tasks: tokenization, sentence boundary detection, negation handling, sentiment analysis, part-of-speech tagging, and named entity extraction.
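A small sketch of some of these tasks via the winkNLP API, assuming nlp is instantiated as shown earlier; the sample text is illustrative:
const its = nlp.its;
const doc = nlp.readDoc('She had not ordered the red shoes on 15th July.');
// Sentence boundary detection.
console.log(doc.sentences().out());
// Part-of-speech tag of each token.
console.log(doc.tokens().out(its.pos));
// Named entities along with their types.
console.log(doc.entities().out(its.detail));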
While it is trained to process English-language text, it can tokenize text containing other languages such as Hindi, French and German. Such tokens are tagged as X (foreign word) during POS tagging.
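For instance, a sentence mixing in French words can still be tokenized; the non-English tokens may surface with the X tag, though the exact tags depend on the text:
const doc = nlp.readDoc('He waved and said merci beaucoup before leaving.');
console.log(doc.tokens().out(its.pos));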
The model follows the Universal POS tags standard. It delivers an accuracy of ~95% on a subset of the WSJ corpus; this includes tokenization of raw text prior to POS tagging.
The named entity recognition model is trained to detect CARDINAL, DATE, DURATION, EMAIL, EMOJI, EMOTICON, HASHTAG, MENTION, MONEY, ORDINAL, PERCENT, TIME, and URL entities.
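For example, a sentence containing amounts, dates, and percentages could be inspected like this (the expected entity types are indicative, not guaranteed):
const doc = nlp.readDoc('Send $25 to john@example.com by 5 PM on July 15, and get 10% off.');
// Each detected entity with its value and type, e.g. MONEY, EMAIL, TIME, DATE, PERCENT.
console.log(doc.entities().out(its.detail));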
The sentiment analysis model delivers an f-score of ~84.5% when validated using the Amazon Product Review Sentiment Labelled Sentences Data Set at the UCI Machine Learning Repository.
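The document-level sentiment score, which ranges from -1 (negative) to +1 (positive), can be obtained as follows:
const doc = nlp.readDoc('The product is not bad at all, I loved it!');
// A score closer to +1 indicates positive sentiment; closer to -1, negative.
console.log(doc.out(its.sentiment));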
The model is packaged in the standard JSON format. Apart from the data, it contains a tiny fraction of JS glue code, which is primarily used during model loading.
If you spot a bug that has not yet been reported, please raise a new issue.
Wink is a family of open-source packages for Natural Language Processing, Machine Learning and Statistical Analysis in Node.js. The code is thoroughly documented for easy human comprehension and has a test coverage of ~100%, making it reliable enough to build production-grade solutions.
The wink-eng-lite-web-model is copyright 2020-24 GRAYPE Systems Private Limited.
It is licensed under the terms of the MIT License.