
Building a new Big Data distribution based on Kubernetes with a twist!

Formal Metadata

Title
Building a new Big Data distribution based on Kubernetes with a twist!
Title of Series
Number of Parts
69
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
The need for companies to deploy and operate Big Data infrastructures hasn't gone away, but their options for doing so have dwindled in the past few years. That's why we decided to build a new Open Source Big Data distribution. It includes the usual suspects: Apache Kafka, Apache Spark, Apache NiFi, etc. We asked around and were told it was a crazy idea, but we did it anyway: we implemented a Kubelet in Rust that uses systemd as its backend instead of a container runtime. We also started writing Operators that target these special Kubelets. This means we can deploy hybrid infrastructure (partly running in containers and partly on "bare metal") using the same stack, the same tools, the same description languages, and the same knowledge, getting the best of both worlds. In this talk we'll share what we learned about writing Kubernetes Operators (in Rust) and offer insights into our new distribution.
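The Operator pattern the abstract describes boils down to a reconciliation loop: compare the desired state declared in a custom resource with the state observed on the node (here, reported by systemd), and derive the action that closes the gap. The following is a minimal, self-contained sketch of that decision step in Rust; the names (`Action`, `reconcile`, the unit names) are illustrative assumptions, not taken from the actual distribution's codebase.

```rust
// Hypothetical sketch of an Operator's reconciliation decision: given what
// the custom resource wants and what the systemd-backed Kubelet observes,
// compute the action that moves the node toward the desired state.

#[derive(Debug, PartialEq)]
enum Action {
    StartService(String), // ask the systemd-backed Kubelet to start a unit
    StopService(String),  // ask it to stop a unit
    Nothing,              // observed state already matches desired state
}

/// Decide what to do for one service, given whether the spec wants it
/// running and whether systemd currently reports it as active.
fn reconcile(service: &str, desired_running: bool, observed_running: bool) -> Action {
    match (desired_running, observed_running) {
        (true, false) => Action::StartService(service.to_string()),
        (false, true) => Action::StopService(service.to_string()),
        _ => Action::Nothing,
    }
}

fn main() {
    // A Kafka broker declared in the spec but not yet active on the node:
    let action = reconcile("kafka-broker", true, false);
    println!("{:?}", action); // prints: StartService("kafka-broker")
}
```

A real Operator would run this decision inside a watch loop driven by the Kubernetes API server, but the same pure comparison works whether the backend is a container runtime or systemd, which is what makes the hybrid deployment described above possible.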