
Bringing monitoring into the 21st century

Formal Metadata

Title: Bringing monitoring into the 21st century
Number of Parts: 84
License: CC Attribution 2.0 Belgium
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Year: 2012

Content Metadata

Abstract
We stumbled upon ancient (circa 1960) scrolls of wisdom in the field of statistics and applied them to modern-day monitoring systems. The result is a monitoring system that detects anomalies instead of relying on statically defined thresholds, and that predicts failures long before they become problems.

There are many, many things wrong with the typical free monitoring solutions available today:

- They rely on probing rather than looking at actual data generated by real users of the monitored system/service.
- Their definition of a failure is simply some value exceeding a statically defined threshold.
- They don't tell you something is wrong until it's too late.

In this talk, I'll present an alternative:

* It prefers passive monitoring, gathering data about the experience real users have with the system/service.
* It detects any anomaly. Unexpectedly low values can be indicative of problems, too!
* It uses various statistical methods to predict problems before they occur. It's easy to imagine a file system that fills up another 5% every month, and easy to predict how full it will be in 5 months. It's more complicated (but certainly doable!) to make an automatically computed, educated guess as to whether you'll be able to handle the annual spike in demand on your webshop this holiday season, based on the load over the preceding months.
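
The sketch below is not code from the talk; it is a minimal illustration of the two ideas the abstract mentions: fitting a straight line to capacity measurements to predict when a file system will be full, and flagging readings that deviate in either direction from a recent baseline instead of comparing them to a static threshold. The sample usage numbers, the 10% tolerance band, and the helper names are made-up assumptions.

```python
# Illustrative sketch only: linear-trend capacity prediction plus a simple
# two-sided anomaly check. Sample data and thresholds are assumptions.
from statistics import mean

# Monthly file-system usage in percent (hypothetical: grows ~5% per month).
usage = [55.0, 60.2, 64.8, 70.1, 75.0, 79.9]

def linear_fit(ys):
    """Least-squares fit of y = a + b*x for x = 0, 1, 2, ..."""
    xs = range(len(ys))
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

a, b = linear_fit(usage)
# Extrapolate the fitted line from the latest month until it reaches 100%.
months_until_full = (100.0 - (a + b * (len(usage) - 1))) / b
print(f"Growing ~{b:.1f}% per month; full in about {months_until_full:.1f} months")

def is_anomalous(history, value, tolerance=0.10):
    """Flag a reading that deviates more than `tolerance` (fraction) from the
    average of the last few samples -- too low is as suspicious as too high."""
    baseline = mean(history[-3:])
    return abs(value - baseline) > tolerance * baseline

print(is_anomalous(usage, 40.0))  # True: an unexpectedly *low* reading
print(is_anomalous(usage, 81.0))  # False: consistent with the recent trend
```

A least-squares line is only the simplest possible trend model; the "educated guess" about a seasonal demand spike that the abstract describes would need something that accounts for seasonality, for example comparing the current trajectory against the same period in earlier years.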