NLP, or Natural Language Processing, is a field within Artificial Intelligence that focuses on the interaction between human language and computers. It explores ways to process text data so that computers can derive meaning from it.
As research in the NLP field has progressed, the way we process text data in computers has evolved, and nowadays Python makes exploring and processing text data much easier.
With Python becoming the go-to language for working with text data, many libraries have been developed specifically for the NLP field. In this article, we will explore several useful NLP libraries.
So, let’s get into it.
NLTK
NLTK, or Natural Language Toolkit, is an NLP Python library with many text-processing APIs and industrial-grade wrappers. It's one of the most widely used NLP Python libraries, adopted by researchers, data scientists, engineers, and others, and it has become a standard library for NLP tasks.
Let's explore what NLTK can do. First, we need to install the library with the following command:
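pip install -U nltk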
Once the installation is finished, we can see what NLTK can do. First, NLTK can perform tokenization using the following code:
import nltk
from nltk.tokenize import word_tokenize
# Download the necessary resources
nltk.download('punkt')
text = "The fruit in the table is a banana"
tokens = word_tokenize(text)
print(tokens)
Output>>
['The', 'fruit', 'in', 'the', 'table', 'is', 'a', 'banana']
Tokenization splits a sentence into individual tokens, which can be words or punctuation.
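NLTK can also split text at the sentence level with sent_tokenize, which relies on the same punkt resource downloaded above. A minimal sketch (the sample text is just for illustration):
from nltk.tokenize import sent_tokenize

text = "The fruit in the table is a banana. I will eat it later."
# Splits the text into a list of two sentences
print(sent_tokenize(text))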
With NLTK, we can also perform Part-of-Speech (POS) tagging on the text sample.
from nltk.tag import pos_tag
nltk.download('averaged_perceptron_tagger')
text = "The fruit in the table is a banana"
tokens = word_tokenize(text)
pos_tags = pos_tag(tokens)
print(pos_tags)
Output>>
[('The', 'DT'), ('fruit', 'NN'), ('in', 'IN'), ('the', 'DT'), ('table', 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('banana', 'NN')]
The output of the POS tagger with NLTK is each token paired with its POS tag. For example, the word 'fruit' is a noun (NN), and the word 'a' is a determiner (DT).
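If you are unsure what a tag abbreviation means, NLTK can also print its definition. A small sketch, assuming the tagsets resource has not been downloaded yet:
import nltk

nltk.download('tagsets')  # resource containing the tag definitions
nltk.help.upenn_tagset('DT')  # prints the meaning of the DT (determiner) tag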
It's also possible to perform stemming and lemmatization with NLTK. Stemming reduces a word to its base form by cutting off prefixes and suffixes, while lemmatization transforms a word into its base form by considering the word's POS and morphological analysis.
from nltk.stem import PorterStemmer, WordNetLemmatizer
nltk.download('wordnet')
nltk.download('punkt')
text = "The striped bats are hanging on their feet for best"
tokens = word_tokenize(text)
# Stemming
stemmer = PorterStemmer()
stems = [stemmer.stem(token) for token in tokens]
print("Stems:", stems)
# Lemmatization
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(token) for token in tokens]
print("Lemmas:", lemmas)
Output>>
Stems: ['the', 'stripe', 'bat', 'are', 'hang', 'on', 'their', 'feet', 'for', 'best']
Lemmas: ['The', 'striped', 'bat', 'are', 'hanging', 'on', 'their', 'foot', 'for', 'best']
You can see that the stemming and lemmatization processes give slightly different results for some of the words.
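One reason for the difference is that the lemmatizer treats every token as a noun by default; passing the POS tag usually gives a better lemma. A quick sketch:
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("hanging"))           # treated as a noun, stays 'hanging'
print(lemmatizer.lemmatize("hanging", pos="v"))  # treated as a verb, becomes 'hang'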
That's the simple usage of NLTK. You can still do many things with it, but the APIs above are the most commonly used.
SpaCy
SpaCy is an NLP Python library designed specifically for production use. It's an advanced library known for its performance and ability to handle large amounts of text data, which makes it a preferred choice for many industrial NLP use cases.
To install SpaCy, take a look at their usage page; depending on your requirements, there are many installation combinations to choose from.
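For example, a common CPU-only setup that also downloads the small English pipeline used below looks like this (adjust it to your own environment):
pip install -U spacy
python -m spacy download en_core_web_sm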
Let's try using SpaCy for NLP tasks. First, we will perform Named Entity Recognition (NER) with the library. NER is the process of identifying and classifying named entities in text into predefined categories, such as person, organization, location, and more.
import spacy
nlp = spacy.load("en_core_web_sm")
text = "Brad is working in the U.K. Startup called AIForLife for 7 Months."
doc = nlp(text)
# Perform the NER
for ent in doc.ents:
    print(ent.text, ent.label_)
Output>>
Brad PERSON
the U.K. Startup ORG
7 Months DATE
As you can see, the SpaCy pre-trained model recognizes which words in the document can be classified as named entities.
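The detected entities can also be highlighted visually with SpaCy's displaCy visualizer, for example inside a Jupyter Notebook. A minimal sketch reusing the doc object from above:
from spacy import displacy

# Render the named entities as highlighted spans in a Jupyter Notebook
displacy.render(doc, style="ent", jupyter=True)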
Next, we can use SpaCy to perform dependency parsing and visualize it. Dependency parsing is the process of understanding how each word relates to the others by forming a tree structure.
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
text = "Brad is working in the U.K. Startup called AIForLife for 7 Months."
doc = nlp(text)
for token in doc:
    print(f"{token.text}: {token.dep_}, {token.head.text}")
displacy.render(doc, jupyter=True)
Output>>
Brad: nsubj, working
is: aux, working
working: ROOT, working
in: prep, working
the: det, Startup
U.K.: compound, Startup
Startup: pobj, in
called: advcl, working
AIForLife: oprd, called
for: prep, called
7: nummod, Months
Months: pobj, for
.: punct, working
The output lists each token with its dependency label and the head word it relates to. The code above also renders the dependency tree visualization in your Jupyter Notebook.
Lastly, let's try performing text similarity with SpaCy. Text similarity measures how similar or related two pieces of text are. There are many techniques and measures, but we will try the simplest one.
import spacy
nlp = spacy.load("en_core_web_sm")
doc1 = nlp("I like pizza")
doc2 = nlp("I love hamburger")
# Calculate similarity
similarity = doc1.similarity(doc2)
print("Similarity:", similarity)
Output>>
Similarity: 0.6159097609586724
The similarity method returns a score, usually between 0 and 1; the closer the score is to 1, the more similar the two texts are.
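Note that the small en_core_web_sm pipeline ships without static word vectors, so SpaCy approximates this score from other components and emits a warning. For more meaningful scores, you can load a pipeline with vectors, such as en_core_web_md. A quick sketch, assuming the larger model has been downloaded:
import spacy

# en_core_web_md includes word vectors, so similarity scores are more reliable
# (download it first with: python -m spacy download en_core_web_md)
nlp = spacy.load("en_core_web_md")
doc1 = nlp("I like pizza")
doc2 = nlp("I love hamburger")
print("Similarity:", doc1.similarity(doc2))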
There are still many things you can do with SpaCy. Explore the documentation to find something useful for your work.
TextBlob
TextBlob is an NLP Python library for processing textual data that is built on top of NLTK. It simplifies much of NLTK's API and streamlines common text-processing tasks.
You can install TextBlob using the following code:
pip install -U textblob
python -m textblob.download_corpora
Let's try TextBlob on some NLP tasks. The first one is sentiment analysis, which we can do with the code below.
from textblob import TextBlob
text = "I am in the top of the world"
blob = TextBlob(text)
sentiment = blob.sentiment
print(sentiment)
Output>>
Sentiment(polarity=0.5, subjectivity=0.5)
The output is a polarity and a subjectivity score. Polarity is the sentiment of the text, where the score ranges from -1 (negative) to 1 (positive), while the subjectivity score ranges from 0 (objective) to 1 (subjective).
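To see the other end of the polarity range, you can run the same call on a clearly negative sentence. A quick sketch (the sample sentence is made up, and the exact score depends on TextBlob's lexicon):
from textblob import TextBlob

# A clearly negative sentence should give a polarity below 0
print(TextBlob("This is a terrible and boring movie").sentiment)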
We can also use TextBlob for text correction tasks. You can do that with the following code.
from textblob import TextBlob
text = "I havv goood speling."
blob = TextBlob(text)
# Spelling Correction
corrected_blob = blob.correct()
print("Corrected Text:", corrected_blob)
Output>>
Corrected Text: I have good spelling.
Explore the TextBlob package to find the APIs that fit your text tasks.
Gensim
Gensim is an open-source Python NLP library specializing in topic modeling and document similarity analysis, especially for large and streaming datasets. It focuses more on industrial, real-time applications.
Let's try the library. First, we can install it with the following command:
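pip install -U gensim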
After the installation is finished, we can try Gensim's capabilities. Let's do topic modeling with LDA using Gensim.
import gensim
from gensim import corpora
from gensim.models import LdaModel
# Sample documents
documents = [
    "Tennis is my favorite sport to play.",
    "Football is a popular competition in certain country.",
    "There are many athletes currently training for the olympic."
]
# Preprocess documents
texts = [[word for word in document.lower().split()] for document in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
# The LDA model
lda_model = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=15)
topics = lda_model.print_topics()
for topic in topics:
    print(topic)
Output>>
(0, '0.073*"there" + 0.073*"currently" + 0.073*"olympic." + 0.073*"the" + 0.073*"athletes" + 0.073*"for" + 0.073*"training" + 0.073*"many" + 0.073*"are" + 0.025*"is"')
(1, '0.094*"is" + 0.057*"football" + 0.057*"certain" + 0.057*"popular" + 0.057*"a" + 0.057*"competition" + 0.057*"country." + 0.057*"in" + 0.057*"favorite" + 0.057*"tennis"')
The output is a combination of words from the sample documents that together form a topic. You can evaluate whether the result makes sense.
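Once the model is trained, you can also infer the topic distribution of a new, unseen document. A minimal sketch reusing the dictionary and lda_model objects from above (the sample sentence is made up for illustration):
# Convert the new document to bag-of-words using the existing dictionary
new_doc = "Many athletes play tennis and football."
new_bow = dictionary.doc2bow(new_doc.lower().split())
# Infer the topic mixture of the unseen document
print(lda_model.get_document_topics(new_bow))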
Gensim also provides a way to embed content. For example, we can use Word2Vec to create word embeddings.
import gensim
from gensim.models import Word2Vec
# Sample sentences
sentences = [
    ['machine', 'learning'],
    ['deep', 'learning', 'models'],
    ['natural', 'language', 'processing']
]
# Train Word2Vec model
model = Word2Vec(sentences, vector_size=20, window=5, min_count=1, workers=4)
vector = model.wv['machine']
print(vector)
Output>>
[ 0.01174188 -0.02259516 0.04194366 -0.04929082 0.0338232 0.01457208
-0.02466416 0.02199094 -0.00869787 0.03355692 0.04982425 -0.02181222
-0.00299669 -0.02847819 0.01925411 0.01393313 0.03445538 0.03050548
0.04769249 0.04636709]
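The trained model can also be queried for the words closest to a given word in the embedding space, although with such a tiny toy corpus the neighbors are not very meaningful. A quick sketch reusing the model above:
# Find the three words whose vectors are closest to 'machine'
print(model.wv.most_similar('machine', topn=3))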
There are still many things you can do with Gensim. Check the documentation and see what fits your needs.
Conclusion
In this article, we explored several Python NLP libraries that are essential for many text tasks, from text tokenization to word embedding. The libraries we discussed are:
- NLTK
- SpaCy
- TextBlob
- Gensim
I hope it helps!
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.