
The Worst Day of Your Life

Formal Metadata

Title
The Worst Day of Your Life
Title of Series
Number of Parts
31
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language
Production Place
Ottawa, Canada

Content Metadata

Subject Area
Genre
Abstract
Recovering from Crises and Disasters

What do you do when the worst happens? It could be a catastrophic hardware (or even data center) failure, a badly placed rm -rf *, or a PostgreSQL bug. We'll discuss how to recover from disasters that are far outside the usual operating procedure... and how to avoid getting into them in the first place.

Every DBA with real-life experience knows that sinking feeling when you realize that something terrible has happened: PostgreSQL crashes with a PANIC message, you realize you were on the production system when you dropped that table, or you get a status update that "us-east is currently experiencing problems." What do you do? There's no single solution to catastrophic problems, but we can talk about strategies that might help you keep a cool head while everyone around you is losing theirs.

We'll talk about things like:
- Dealing with PostgreSQL bugs.
- Catastrophic hardware failures.
- Application and operator error.

And, of course, we'll discuss what you need to do in advance to make the Worst Day of Your Life a little bit less traumatic:
- Backup and recovery strategies and tradeoffs.
- Upgrade procedures.
- Planning for business continuity in major disasters.
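
As a concrete illustration of the backup-and-recovery preparation the talk refers to, here is a minimal sketch of PostgreSQL continuous WAL archiving followed by point-in-time recovery. The archive directory, backup directory, and recovery target time are placeholders, and the recovery settings shown apply to PostgreSQL 12 and later (older releases use a separate recovery.conf file); adjust for your own version and environment.

  # postgresql.conf -- enable continuous WAL archiving
  # (assumes /var/lib/pgsql/wal_archive exists and is writable by the postgres user)
  wal_level = replica
  archive_mode = on
  archive_command = 'test ! -f /var/lib/pgsql/wal_archive/%f && cp %p /var/lib/pgsql/wal_archive/%f'

  # Take a base backup while the server is running (placeholder destination path)
  pg_basebackup -D /var/lib/pgsql/base_backup -Fp -Xs -P

  # Point-in-time recovery after, say, an accidental DROP TABLE:
  # restore the base backup into a fresh data directory, then set in postgresql.conf:
  restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
  recovery_target_time = '2024-01-01 12:34:00'   # placeholder: a moment just before the mistake
  # finally, create an empty recovery.signal file in the data directory and start the server.

The tradeoff the abstract alludes to is that WAL archiving adds storage and operational overhead, but in exchange it lets you recover to an arbitrary point in time rather than only to the most recent base backup.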