
How do you scale a logging infrastructure to accept a billion messages a day

Formal Metadata

Title
How do you scale a logging infrastructure to accept a billion messages a day
Title of Series
Number of Parts
163
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt, copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content, including in adapted form, is shared only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Over the past year, OpenTable have been re-architecting their system, moving from a monolithic architecture towards microservices and small applications. As the infrastructure has changed, the logging infrastructure has had to change with it. Originally we had a logging solution in which all logs were stored in SQL Server. We then adopted the ELK stack, which allowed us to scale further. As the company moved into the cloud, we had to scale even more, so we decided to move to Apache Kafka. This talk covers the steps we went through, how we approached the work, and the lessons learned. We are now in a position where we can elastically scale our logging infrastructure with ease. By the end of this talk, Paul will have demonstrated why Apache Kafka is a perfect addition to the ELK stack and how it allows us to add more resiliency and redundancy to our logging infrastructure.
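To make the architecture described in the abstract concrete: in this pattern, applications publish log events to a Kafka topic, which acts as a durable buffer in front of the ELK stack, so Elasticsearch ingestion can lag or restart without messages being lost. The sketch below is not OpenTable's actual code; it is a minimal illustration using the kafka-python client, with a hypothetical broker address and topic name ("logs"), of how a service might ship structured log events to such a topic for a downstream Logstash consumer to index.

```python
# Minimal sketch: publish structured log events to a Kafka topic that
# buffers messages ahead of the ELK stack. Broker address and topic name
# are assumptions for illustration, not values from the talk.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=["kafka-broker:9092"],  # hypothetical broker address
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
    acks="all",   # wait for all in-sync replicas: favours durability over latency
    retries=3,    # retry transient broker failures instead of dropping log events
)

def log(level, message, **fields):
    """Publish one log event; a Logstash consumer would index it into Elasticsearch."""
    event = {"@timestamp": time.time(), "level": level, "message": message, **fields}
    producer.send("logs", value=event)

log("INFO", "user searched for a table", service="search-api")
producer.flush()  # ensure buffered events reach the brokers before exit
```

Because producers only need the brokers to be available, adding Kafka decouples log production from log indexing, which is what makes it straightforward to scale the two sides independently.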