
How to deploy natural language processing (NLP): Getting started

The release of Elastic Stack 8.0 introduced the ability to upload PyTorch machine learning models into Elasticsearch, bringing modern natural language processing (NLP) to the Elastic Stack. NLP opens up opportunities to extract information, classify text, and provide better search relevance through dense vectors and approximate nearest neighbor search.

In this multi-part blog series, we will walk through end-to-end examples using a variety of PyTorch NLP models.

Part 1: Getting started with NLP models
Part 2: Named entity recognition (NER)
Part 3: Sentiment analysis

In each example we will use a prebuilt NLP model from the Hugging Face model hub, then follow Elastic's documented instructions for deploying an NLP model and adding NLP inference to an ingest pipeline. Because it's always a good idea to start with a defined use case and an understanding of the text data the model will process, we'll begin by defining the objective for using NLP and a shared data set anyone can try out.
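As a preview of the deployment step, models are typically imported with Elastic's Eland client. A sketch of the import command, assuming a Cloud trial deployment and a NER model from the Hugging Face hub (the placeholders `<cloud-id>`, `<username>`, and `<password>` are values from your own cluster; the exact models used appear in the later parts):

```shell
# Install the Eland client with PyTorch support (assumption: a pip environment)
python -m pip install 'eland[pytorch]'

# Import a Hugging Face model into Elasticsearch and start its deployment.
# <cloud-id>, <username>, and <password> are placeholders for your trial cluster.
eland_import_hub_model \
  --cloud-id <cloud-id> \
  -u <username> -p <password> \
  --hub-model-id elastic/distilbert-base-uncased-finetuned-conll03-english \
  --task-type ner \
  --start
```

The `--start` flag deploys the model to the ML node immediately after upload, so it is ready for inference.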

To prepare for the NLP examples, we will need an Elasticsearch cluster running at least version 8.0 with an ML node that has at least 2GB of RAM; the named entity recognition (NER) example also requires the mapper-annotated-text plugin. One of the easiest ways to get started is to follow along with these NLP examples using your own free 14-day trial cluster on Elastic Cloud. Cloud trials can scale up to two 2GB ML nodes, which will let you deploy one or two of the examples at any one time in this multi-part blog series.
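Once a model is imported and started, inference is attached to an ingest pipeline via an inference processor. A minimal sketch of such a pipeline body, assuming the model was imported with the hypothetical ID below (Eland replaces `/` with `__` in hub model IDs) and that incoming documents carry their text in a `message` field:

```python
# Sketch of an ingest pipeline body with an inference processor.
# Assumptions: the model was imported with the hypothetical ID below,
# and incoming documents hold their text in a "message" field.
# The model reads its input from "text_field", hence the field_map.

model_id = "elastic__distilbert-base-uncased-finetuned-conll03-english"

ner_pipeline = {
    "description": "Run NER inference on each document's message field",
    "processors": [
        {
            "inference": {
                "model_id": model_id,
                "target_field": "ml.ner",
                "field_map": {"message": "text_field"},
            }
        }
    ],
}

# This body would then be registered with PUT _ingest/pipeline/<name>
# (for example via the elasticsearch-py client's ingest.put_pipeline).
```

Documents indexed through the pipeline get the model's predictions written under `ml.ner`, alongside the original fields.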
