Let's Do Data Lineage in Kafka, Flink and Druid!

Formal Metadata

Title
Let's Do Data Lineage in Kafka, Flink and Druid!
Title of Series
Number of Parts
64
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Data lineage means you can track the data bits in your system and know at any time where they came from and how exactly they have been processed. Enterprise systems need to be able to prove lineage for compliance reasons, but in general, lineage is also a big part of data discoverability and governance. In this talk, I am going to connect a few Raspberry Pis that collect ADS-B (aircraft radar) data to a KFD (Kafka-Flink-Druid) stack for analytical processing. I will deliver the data through Kafka, cleanse and enrich them with Flink, and run analytical queries on the result with Druid. I am going to track data lineage through Kafka metadata, and I am going to show how that information can be maintained throughout the processing pipeline. This relies on Kafka headers, an underused feature of Kafka that also integrates readily with Druid! You will learn how data lineage can be implemented using the open source KFD stack and readily available data sources, so you, too, can try out enterprise-style data lineage processing, and prepare yourself at home for a question that will arise in any enterprise data engineering project!
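To make the headers idea concrete: Kafka record headers are simple (key, byte-value) pairs that travel with each message, separate from the payload, which makes them a natural carrier for lineage metadata. The following is a minimal sketch of how such headers could be constructed; the `lineage.*` header names and the `build_lineage_headers` helper are illustrative assumptions, not part of the talk or any standard.

```python
import uuid
from datetime import datetime, timezone

def build_lineage_headers(source, pipeline_step, parent_id=None):
    """Build a list of Kafka-style record headers carrying lineage metadata.

    Kafka headers are (str, bytes) pairs. The header names used here
    ("lineage.*") are illustrative, not a standard convention.
    """
    record_id = str(uuid.uuid4())
    headers = [
        # Unique id for this record, so downstream steps can reference it.
        ("lineage.record_id", record_id.encode("utf-8")),
        # Where the data originated, e.g. a specific Raspberry Pi receiver.
        ("lineage.source", source.encode("utf-8")),
        # Which pipeline stage emitted this record (ingest, enrich, ...).
        ("lineage.step", pipeline_step.encode("utf-8")),
        # When this stage processed the record.
        ("lineage.processed_at",
         datetime.now(timezone.utc).isoformat().encode("utf-8")),
    ]
    if parent_id is not None:
        # Link back to the upstream record this one was derived from.
        headers.append(("lineage.parent_id", parent_id.encode("utf-8")))
    return headers

# With a client such as kafka-python, the headers would travel alongside
# the payload (hypothetical topic name and value shown):
#   producer.send("adsb-raw", value=payload_bytes,
#                 headers=build_lineage_headers("raspberrypi-01", "ingest"))
```

A downstream Flink job could read these headers, append its own `lineage.step` entry, and forward them, so the full processing history stays attached to each record all the way into Druid.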