
Leveraging Linked Data using Python and SPARQL

Formal Metadata

Title
Leveraging Linked Data using Python and SPARQL
Title of Series
Number of Parts
115
Author
Contributors
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Wikipedia is the digital encyclopedia we use daily to look up facts and information. What could be better than extracting this wealth of crowd-sourced knowledge from Wikipedia without traditional web scrapers? Various community-driven projects extract knowledge from Wikipedia and store it in structured form, retrievable using SPARQL, and that data can be mined for a wide range of data science projects. In this talk, I will walk through the basics of the Open Web and how to tap this huge open database from Python. The agenda includes the following:
• Why Wikipedia?
• Introduction to DBpedia and Wikidata
• Introduction to Linked Data
• How to query DBpedia/Wikidata
  o Build a SPARQL query
  o Use Python's SPARQLWrapper
• Python code walkthrough to create
  o A tabular dataset using SPARQL (see the first sketch after this list)
  o A corpus for language models using Wikipedia and BeautifulSoup
  o A use case leveraging both SPARQLWrapper and Wikipedia to create a domain-specific corpus (see the second sketch below)
Prerequisites: basic knowledge of Python programming, Natural Language Processing, and SQL.
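
As a concrete illustration of the "tabular dataset" item, here is a minimal sketch, not taken from the talk itself, that queries the public DBpedia endpoint with SPARQLWrapper and flattens the JSON bindings into a pandas DataFrame. The specific query (scientists and their birth dates), the variable names, and the use of pandas are assumptions for illustration.

    # Minimal sketch: query DBpedia with SPARQLWrapper and build a tabular
    # dataset. The query and column names are illustrative, not from the talk.
    from SPARQLWrapper import SPARQLWrapper, JSON
    import pandas as pd

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?scientist ?name ?birthDate WHERE {
            ?scientist a dbo:Scientist ;
                       rdfs:label ?name ;
                       dbo:birthDate ?birthDate .
            FILTER (lang(?name) = "en")
        }
        LIMIT 100
    """)
    sparql.setReturnFormat(JSON)  # ask the endpoint for JSON results

    results = sparql.query().convert()

    # Each binding maps a variable name to {"type": ..., "value": ...};
    # keep only the values and load them into a DataFrame.
    rows = [
        {var: binding[var]["value"] for var in binding}
        for binding in results["results"]["bindings"]
    ]
    df = pd.DataFrame(rows)
    print(df.head())

The same pattern works against the Wikidata endpoint (https://query.wikidata.org/sparql); only the query vocabulary changes.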
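
And for the "domain-specific corpus" use case, a sketch of the general pattern: use SPARQL to collect Wikipedia page URLs for one domain (here, programming languages, via DBpedia's foaf:isPrimaryTopicOf link to the source article), then scrape each article's body paragraphs with requests and BeautifulSoup. The chosen domain, the CSS selector, and the limits are assumptions; Wikipedia's HTML layout can change, so the selector may need adjusting.

    # Minimal sketch: combine SPARQLWrapper and BeautifulSoup to build a
    # domain-specific text corpus. Domain and CSS selector are illustrative.
    import requests
    from bs4 import BeautifulSoup
    from SPARQLWrapper import SPARQLWrapper, JSON

    # Step 1: ask DBpedia for the Wikipedia pages of programming languages.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?page WHERE {
            ?lang a dbo:ProgrammingLanguage ;
                  foaf:isPrimaryTopicOf ?page .
        }
        LIMIT 20
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    urls = [b["page"]["value"] for b in bindings]

    # Step 2: scrape the body paragraphs of each article into a corpus.
    corpus = []
    for url in urls:
        html = requests.get(url, headers={"User-Agent": "corpus-sketch/0.1"}).text
        soup = BeautifulSoup(html, "html.parser")
        paragraphs = soup.select("div.mw-parser-output > p")
        text = "\n".join(p.get_text(" ", strip=True) for p in paragraphs)
        if text:
            corpus.append(text)

    print(f"Collected {len(corpus)} documents")

The design point is that SPARQL does the selection (which pages belong to the domain) while BeautifulSoup does the extraction (the article text itself), so neither tool is stretched beyond what it is good at.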