
Billion Tables Project (BTP)

Formal Metadata

Title
Billion Tables Project (BTP)
Alternative Title
The Billion Tables Project
Number of Parts
25
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt, copy, distribute and transmit the work or content, in adapted or unchanged form, for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content, including adapted forms, is shared only under the conditions of this license.
Release Date
2013
Language
English
Production Place
Ottawa, Canada

Content Metadata

Abstract
A.k.a. how long a "\dt" takes on a 1B-tables database.

Usually databases are considered "large" because of the high number of records they hold, reaching billions or even more than that. But what about creating a billion... tables? Some time ago, this apparently crazy question was found in a database soup. It may not be your day-to-day task, but creating that many tables exposes topics about PostgreSQL internals, performance and large databases that may be really worthwhile for your day-to-day work. Join us for this talk, where we'll be discussing topics such as catalogue structure and storage requirements, table creation speed, differences between PostgreSQL versions, and durability vs. table creation speed tradeoffs, among others. And, of course, how long a "\dt" takes on a 1B-tables database :)

This talk will explore all the steps taken to achieve such a result, raising questions on topics such as:

- The catalogue structure and its storage requirements.
- Table creation speed.
- Durability tradeoffs to achieve the desired goal.
- The strategy used to be able to create the 1B tables.
- The scripts / programs used.
- How the database behaves under such a high table count.
- Differences in table creation speed, and other shortcuts, between different PostgreSQL versions.
- How the storage media and database memory affect table creation speed and the feasibility of the task.
- Whether it makes sense to have such a database at all.

It is intended to be a fun, open talk for a beginner-to-medium level audience interested in large databases, performance and PostgreSQL internals.
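As a rough illustration of the kind of batching strategy the abstract alludes to (the talk's actual scripts are not shown here, so the function below is a hypothetical sketch), one can generate CREATE TABLE statements in batched transactions: wrapping many table creations in a single BEGIN/COMMIT amortizes the per-commit durability cost, which matters when the target is a billion tables.

```python
def batched_create_statements(n_tables, batch_size=10_000, prefix="t"):
    """Yield SQL chunks, each creating up to batch_size tables
    inside one transaction. Names like t0, t1, ... and the
    single-int-column schema are illustrative assumptions."""
    for start in range(0, n_tables, batch_size):
        stmts = ["BEGIN;"]
        for i in range(start, min(start + batch_size, n_tables)):
            # One tiny single-column table per statement keeps the
            # per-table catalogue footprint as small as possible.
            stmts.append(f"CREATE TABLE {prefix}{i} (c int);")
        stmts.append("COMMIT;")
        yield "\n".join(stmts)
```

Each chunk could then be piped to `psql -q -f -`, for example; larger batch sizes mean fewer commits but longer-held locks, which is one of the tradeoffs the talk discusses.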