NLPeasy - a Workflow to Analyse, Enrich, and Explore Textual Data

Formal Metadata

Title
NLPeasy - a Workflow to Analyse, Enrich, and Explore Textual Data
Subtitle
Use pre-trained NLP models, ingest into Elasticsearch, and enjoy auto-generated Kibana dashboards!
Title of Series
Number of Parts
130
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Ever wanted to try out NLP methods but felt it was too cumbersome to set up a workflow for textual data? How do you enrich your data based on textual features and explore the results? NLPeasy (https://github.com/d-one/NLPeasy) does that: enrich the data using well-known pre-trained models (word embeddings, sentiment analysis, POS tagging, dependency parsing). Then start the Elastic Stack on your Docker. Set up indices and ingest the data in bulk. And finally generate Kibana dashboards to explore the results. Complicated? Not at all! Just do it in a simple Jupyter notebook. In this presentation we will give an architecture overview of the different components and demonstrate the capabilities of this Python package.
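The enrich-then-ingest workflow the abstract describes can be sketched in miniature. The code below is illustrative only, not NLPeasy's actual API: the step functions and the toy word-list sentiment are stand-ins for the package's pre-trained-model enrichments, and the bulk formatter mimics the Elasticsearch bulk-API shape that the ingest step targets.

```python
# Illustrative sketch of the workflow: enrich each document with extra
# fields, then format the results for Elasticsearch bulk ingestion.
# All names here are invented for illustration; NLPeasy's real API differs.
import json

def sentiment_step(doc):
    # Toy sentiment: positive minus negative word counts, a stand-in
    # for a pre-trained sentiment model.
    positives = {"great", "enjoy", "good"}
    negatives = {"cumbersome", "bad", "complicated"}
    words = doc["text"].lower().split()
    doc["sentiment"] = (sum(w in positives for w in words)
                        - sum(w in negatives for w in words))
    return doc

def token_count_step(doc):
    # Stand-in for richer linguistic enrichment (POS, dependencies, ...).
    doc["n_tokens"] = len(doc["text"].split())
    return doc

def enrich(docs, steps):
    # Run every enrichment step over every document, in order.
    for doc in docs:
        for step in steps:
            doc = step(doc)
        yield doc

def to_bulk_lines(docs, index):
    # Elasticsearch bulk format: an action line, then the document source.
    for doc in docs:
        yield json.dumps({"index": {"_index": index}})
        yield json.dumps(doc)

docs = [{"text": "Enjoy auto-generated Kibana dashboards"},
        {"text": "Setting up a workflow felt cumbersome"}]
enriched = list(enrich(docs, [sentiment_step, token_count_step]))
bulk = list(to_bulk_lines(enriched, "talks"))
```

The enriched fields (`sentiment`, `n_tokens`) are exactly the kind of derived columns that Kibana can then aggregate and visualise in an auto-generated dashboard.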